CN110381356B - Audio and video generation method and device, electronic equipment and readable medium - Google Patents


Info

Publication number
CN110381356B
CN110381356B (application CN201910657261.3A)
Authority
CN
China
Prior art keywords
target
content
image content
audio
display page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910657261.3A
Other languages
Chinese (zh)
Other versions
CN110381356A (en)
Inventor
何尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910657261.3A priority Critical patent/CN110381356B/en
Publication of CN110381356A publication Critical patent/CN110381356A/en
Application granted granted Critical
Publication of CN110381356B publication Critical patent/CN110381356B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44222 Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4662 Learning process for intelligent management characterized by learning algorithms
    • H04N 21/4665 Learning process for intelligent management characterized by learning algorithms involving classification methods, e.g. Decision trees
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8166 Monomedia components involving executable data, e.g. software
    • H04N 21/8173 End-user applications, e.g. Web browser, game

Abstract

Embodiments of the present disclosure provide an audio and video generation method and apparatus, an electronic device, and a readable medium. The method includes: if it is determined that the user has chosen to add existing image content to the audio content of a recorded target song, acquiring image content from a preset picture library, the image content including video content and picture content; classifying the acquired image content, and displaying the classification results by category in a current display page and in a pre-display page of the current display page; selecting and displaying a target display page from the current display page and the pre-display page according to a sliding operation of the user; and acquiring target image content selected by the user in the target display page, and generating the audio and video from the audio content and the target image content. With the technical solution of the embodiments of the present disclosure, the user can quickly and conveniently locate the desired image content, which improves the efficiency of audio and video generation.

Description

Audio and video generation method and device, electronic equipment and readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of internet, in particular to an audio and video generation method and device, an electronic device and a readable medium.
Background
In a karaoke application, a user can watch the audio and video published by other users, and can also select a favorite song, record an audio and video for it, and publish the result.
Specifically, the user can enter a favorite song name in the search box and click any karaoke option in the search results to enter the singing interface for that song and record it. In general, after the song is recorded, the user may select a recorded video or a photographed picture from the picture library, and an audio and video may then be generated from the audio of the recorded song and the selected video or picture.
However, when the picture library contains a large amount of content, the user needs to spend a lot of time selecting the desired video content or picture content, so the efficiency of generating the audio and video is low.
Disclosure of Invention
In view of this, the present disclosure provides an audio and video generation method and apparatus, an electronic device, and a readable medium, so as to improve the audio and video generation efficiency.
In a first aspect, an embodiment of the present disclosure provides an audio and video generating method, where the method includes:
if it is determined that the user selects to add the existing image content to the audio content of the recorded target song, acquiring the image content in a preset picture library, wherein the image content comprises video content and picture content;
classifying the acquired image content, and respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of the user;
and acquiring target image content selected by a user in the target display page, and generating audio and video according to the audio content and the target image content.
In a second aspect, an embodiment of the present disclosure provides an audio and video generating apparatus, where the apparatus includes:
the image content acquisition module is used for acquiring image content in a preset picture library if it is determined that the user has chosen to add existing image content to the audio content of a recorded target song, wherein the image content comprises video content and picture content;
the image content processing module is used for classifying the acquired image content;
the processing result display module is used for respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
the page selection display module is used for selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of a user;
the target image content acquisition module is used for acquiring the target image content selected by the user in the target display page;
and the audio and video generation module is used for generating audio and video according to the audio content and the target image content.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the audio-video generation method according to any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure provide a readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the audio and video generation method according to any embodiment of the present disclosure.
According to the audio and video generation method and apparatus, the electronic device, and the readable medium provided by the embodiments of the present disclosure, when the user chooses to add existing image content to the audio content of a recorded target song, the image content acquired from the preset picture library is classified, and the classification results are displayed by category in the current display page and in the pre-display page of the current display page, so that the user can select the desired image content by sliding; the audio and video is then generated from the target image content selected by the user and the audio content of the target song recorded by the user. Compared with the prior art, displaying the image content of the preset picture library by category allows the user to locate the desired image content quickly and conveniently, which improves the efficiency of audio and video generation.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings required for the embodiments or the prior-art descriptions are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a flowchart of an audio and video generation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a display of a classification result provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a display of a currently displayed page provided by an embodiment of the present disclosure;
FIG. 4 is a schematic display diagram of another currently displayed page provided by an embodiment of the present disclosure;
fig. 5 shows a flowchart of another audio and video generation method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a display of a result of a sub-classification process according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram illustrating a display of another currently displayed page provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating a display of image content under a sub-category provided by an embodiment of the present disclosure;
fig. 9 shows a schematic structural diagram of an audio/video generating apparatus provided in an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit illustrated steps. The scope of the present disclosure is not limited in this respect. In the following description, optional features and examples are provided in each embodiment, and the features described in the embodiments may be combined into multiple alternative solutions; each numbered embodiment should therefore not be regarded as defining only a single technical solution.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
Fig. 1 shows a flowchart of an audio and video generation method provided by an embodiment of the present disclosure, which is applicable to generating an audio and video based on the audio content of a recorded target song and existing image content in a preset picture library, and is particularly suitable when the preset picture library contains a large amount of existing image content. The method may be executed by the audio and video generating apparatus or the electronic device provided by the embodiments of the present disclosure, and the apparatus may be implemented in software and/or hardware. Optionally, the electronic device may be a server carrying the audio and video generation function, or a terminal device on which a karaoke application provided by the server is installed.
Optionally, as shown in fig. 1, the audio and video generating method provided in the embodiment of the present disclosure includes the following steps:
and S110, if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, acquiring the image content in a preset picture library.
In this embodiment, the preset photo library may be a preset photo library from which the user may select the existing image content. Optionally, the preset photo library may include at least one of: the system comprises a picture library configured by the user terminal equipment, a picture library in an application program (namely the picture library in the karaoke application program provided by the server), a cloud picture library of the user, a picture library of the server and the like. Wherein the image content includes video content and picture content.
Optionally, determining that the user selects to add the existing image content to the audio content of the recorded target song may be: if the song release event is detected, determining whether the user selects to add the existing content; and if so, determining that the user selects to add the existing image content to the audio content of the recorded target song. The song publishing event may be triggered and generated in a form of manual or voice by a user, and is used for requesting the server or a karaoke application program provided by the server to publish the recorded audio content of the target song.
The example of the server detecting the release event is described. After the audio content of the target song sung by the user is recorded, controlling a K song application program of the user to display the audio content on a preview interface of the audio content; at the moment, if the user is satisfied with the displayed audio content, the user can click a release button in a preview interface to release the audio content, and then the server can detect a release event through the Karaoke application program; and then, the K song application program can be controlled to jump from the preview interface to the release interface, and if the situation that the user clicks any one of the preset picture libraries displayed by the release interface is detected, the situation that the user selects to add the existing image content to the audio content of the recorded target song is determined.
Optionally, if it is determined that the user selects to add the existing image content to the audio content of the recorded target song, the preset picture library selected by the user may be called based on the fixed interface, and the image content may be further acquired from the preset picture library. For example, the image content in the preset photo library selected by the user can be directly copied to the local based on the fixed interface.
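As an illustration only, the "fixed interface" above can be modeled as copying files from a local directory into a working folder; the directory-based model and every name below are assumptions for the sketch, not details from the disclosure.

```python
# Hypothetical sketch: acquire library content by copying it to local storage.
import pathlib
import shutil
import tempfile

def acquire_library(library_dir):
    """Copy every file in the selected picture library to a working folder
    and return the list of copied paths (the locally available image content)."""
    work = pathlib.Path(tempfile.mkdtemp(prefix="kara_"))
    copied = []
    for f in sorted(pathlib.Path(library_dir).iterdir()):
        if f.is_file():
            copied.append(shutil.copy(f, work / f.name))
    return copied
```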
S120: classify the acquired image content, and display the classification results by category in the current display page and in the pre-display page of the current display page.
Optionally, the acquired image content may be classified according to its category. Further, the server, or the karaoke application provided by the server, may identify the category of the acquired image content by its storage format, byte count, a specific identifier, and so on. Classifying the acquired image content may thus mean classifying it according to storage format and/or byte count.
For video content, the storage format may include, but is not limited to, .mp4, .mov, .avi, .wmv, and so on. For picture content, the storage format may include, but is not limited to, .jpg, .png, .gif, .jpeg, and so on. The acquired image content can then be classified based on the different storage formats of video content and picture content. In addition, video content generally occupies more bytes than picture content, so a byte-count threshold can be set through statistical analysis: image content at or above the threshold is classified as video content, and image content below the threshold is classified as picture content.
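The format-and-size rule above can be sketched as follows. This is a minimal illustration assuming a (filename, size) tuple representation and an arbitrary 5 MB threshold; neither the helper names nor the threshold value come from the disclosure.

```python
# Illustrative two-way classification by storage format, with byte count
# as the fallback when the extension is not recognized.
VIDEO_FORMATS = {".mp4", ".mov", ".avi", ".wmv"}
PICTURE_FORMATS = {".jpg", ".png", ".gif", ".jpeg"}
BYTE_THRESHOLD = 5 * 1024 * 1024  # assumed cutoff, set "through statistical analysis"

def classify_image_content(items):
    """Split library items, given as (filename, size_in_bytes) pairs,
    into video content and picture content."""
    result = {"video": [], "picture": []}
    for name, size in items:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in VIDEO_FORMATS:
            result["video"].append(name)
        elif ext in PICTURE_FORMATS:
            result["picture"].append(name)
        elif size >= BYTE_THRESHOLD:  # large file of unknown format: treat as video
            result["video"].append(name)
        else:
            result["picture"].append(name)
    return result
```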
To make selection easier for the user, this embodiment displays the classification results by category in the current display page and in the pre-display page of the current display page; that is, video content and picture content are displayed as two tabs of the image content. Optionally, the current display page and its pre-display page are two parts of the same page. For example, fig. 2 shows a display diagram of a classification result provided by an embodiment of the present disclosure, in which the default content display frame 20 is the current display page, used to display video content, and the content display frame 21 is the pre-display page of the current display page, used to display picture content. Fig. 3 is a schematic diagram of the content display frame 20 displayed in the karaoke application. Optionally, "current display page" and "pre-display page" are relative roles that change dynamically as the user slides left or right: if the user slides to the left in the situation of fig. 3, the currently displayed page becomes the one shown in fig. 4.
Optionally, the current display page and its pre-display page may instead be two parts of the same page in the vertical direction, in which case the user slides up and down to select.
S130: select and display a target display page from the current display page and the pre-display page according to the user's sliding operation.
Specifically, the target display page can be selected and displayed from the current display page and the pre-display page according to the user's left-right or up-down sliding operation. For example, if the user wants to select picture content, the user can slide to the left in the situation shown in fig. 3 and then browse and select the desired picture content in that page; the server, or the karaoke application provided by the server, accordingly takes the pre-display page of fig. 3 as the target display page.
S140: acquire the target image content selected by the user in the target display page, and generate the audio and video from the audio content and the target image content.
In this embodiment, the target display page may display either picture content or video content, so the target image content is either target picture content or target video content. Optionally, the way the audio and video is generated from the audio content and the target image content depends on the kind of target image content.
For example, if the target image content is target video content, it may first be determined whether the target video content carries audio. If not, the audio and video may be generated directly from the audio content and the target video content; if it does, an audio deleting operation may be performed on the target video content, and the audio and video generated from the audio content and the target video content after the deletion.
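The audio-replacement step can be illustrated with the ffmpeg command line, an assumed implementation choice (the disclosure names no specific tool). The helper below only builds the argument list; the function name and paths are hypothetical.

```python
# Hypothetical ffmpeg invocation for merging recorded song audio with a clip.
def build_merge_command(audio_path, video_path, out_path):
    """Return an ffmpeg argument list that replaces the clip's own audio
    track (if any) with the recorded song audio.

    -map 0:v:0 keeps only the video stream of the clip, which discards any
    audio track it carries (the "audio deleting operation" above);
    -map 1:a:0 adds the recorded song as the sole audio track;
    -shortest trims the output to the shorter input.
    """
    return [
        "ffmpeg",
        "-i", video_path,   # input 0: the selected target video content
        "-i", audio_path,   # input 1: the recorded song audio
        "-map", "0:v:0",
        "-map", "1:a:0",
        "-c:v", "copy",     # no video re-encode needed
        "-shortest",
        out_path,
    ]
```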
If the target image content is target picture content and there is only one target picture, the target picture content can be inserted directly into the audio content to generate the audio and video. In this case the target picture content in the generated audio and video is static.
If the target image content comprises at least two target picture contents, the playing order of the at least two target picture contents is determined according to the user's selection operations in the target display page, and the audio and video is then generated from the audio content, the at least two target picture contents, and their playing order. Optionally, the target picture contents selected by the user are played in turn in the generated audio and video, so the target picture content in the generated audio and video is dynamic.
Further, when the target image content comprises at least two target picture contents, after their playing order is determined according to the user's selection operations in the target display page, the number of times each target picture content is played and the duration of each playback can be determined from the playing duration of the audio content. The audio and video can then be generated from the audio content, the at least two target picture contents and their playing order, the number of plays of each target picture content, the duration of each playback, and so on.
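The per-picture timing described above can be sketched as a simple scheduler: given the audio duration and the user's ordering, each picture is shown for an equal slice, looping through the order until the audio ends. The equal-slice, looping policy is an assumption for illustration; the disclosure does not fix a concrete allocation rule.

```python
# Illustrative slideshow scheduler for the multi-picture case.
def build_slideshow_schedule(audio_seconds, ordered_pictures, slice_seconds=3.0):
    """Return (picture, start, end) tuples covering the audio duration,
    cycling through the user-ordered pictures; the last slice is clipped
    to the end of the audio."""
    schedule, t, i = [], 0.0, 0
    while t < audio_seconds:
        pic = ordered_pictures[i % len(ordered_pictures)]
        end = min(t + slice_seconds, audio_seconds)
        schedule.append((pic, t, end))
        t, i = end, i + 1
    return schedule
```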
According to the technical solution provided by this embodiment of the present disclosure, when the user chooses to add existing image content to the audio content of a recorded target song, the image content acquired from the preset picture library is classified, and the classification results are displayed by category in the current display page and in the pre-display page of the current display page, so that the user can select the desired image content by sliding; the audio and video is then generated from the target image content selected by the user and the audio content of the recorded target song. Compared with the prior art, displaying the image content of the preset picture library by category allows the user to locate the desired image content quickly and conveniently, which improves the efficiency of audio and video generation.
Fig. 5 shows a flowchart of another audio and video generation method provided by an embodiment of the present disclosure. This embodiment is an optimization built on the optional solutions of the foregoing embodiment; specifically, it describes in detail how to display the classification results by category in the current display page and in the pre-display page of the current display page, and how to acquire the target image content selected by the user in the target display page.
Optionally, as shown in fig. 5, the audio/video generation method in this embodiment may include the following steps:
and S510, if it is determined that the user selects to add the existing image content to the audio content of the recorded target song, acquiring the image content in the preset picture library.
In this embodiment, the image content includes video content and picture content.
S520: classify the acquired image content.
S530: sub-classify each classification result according to set tags.
In this embodiment, the set tags may be preconfigured and serve as the basis for sub-classifying each classification result, such as the picture content; they may be adjusted according to the actual situation. Optionally, the set tags may include at least one of: landscape, application name, portrait, time, place, and the like. The application name indicates the source of the picture or video content, such as application A or application B.
Specifically, each classification result is sub-classified according to the set tags, so that each classification result comprises one or more sub-categories. Optionally, if a picture or video could belong to two or more sub-categories at the same time, the degree of association between that content and each sub-category can be determined, and the content assigned to the sub-category with the highest degree of association.
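The highest-association rule can be sketched as follows; the score dictionaries stand in for whatever association measure an implementation would compute, which the disclosure does not specify.

```python
# Illustrative sub-classification: each item carries association scores
# for the candidate set tags and lands in the highest-scoring sub-category.
def assign_subcategory(scores):
    """scores maps each candidate tag (e.g. 'landscape', 'portrait') to an
    association degree; the item goes under the highest-scoring tag."""
    return max(scores, key=scores.get)

def sub_classify(items_with_scores):
    """Group (name, scores) pairs into {subcategory: [names]} buckets."""
    buckets = {}
    for name, scores in items_with_scores:
        buckets.setdefault(assign_subcategory(scores), []).append(name)
    return buckets
```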
S540: display the sub-classified result of each classification result in the current display page and in the pre-display page of the current display page, respectively.
Optionally, each sub-category included in a classification result is displayed as a content display frame; when the user clicks a content display frame, all content in that sub-category is displayed. Further, the content in each sub-category may be presented in chronological order.
For example, the picture content and the video content are each sub-classified by landscape, application name, portrait, and so on, so that each includes four sub-categories. Fig. 6 is a schematic diagram of a sub-classification result provided by an embodiment of the present disclosure. Each content display frame is associated with a sub-category name; for example, the content display frame 60 is the current display page, used to display video content; the content display frame 61 is the pre-display page of the current display page, used to display picture content; the content display frame 62 displays the video content in the landscape sub-category; and so on. Fig. 7 is a schematic diagram of the content display frame 61 displayed in the karaoke application.
S550: select and display a target display page from the current display page and the pre-display page according to the user's sliding operation.
S560: acquire the target sub-category selected by the user in the target display page.
Specifically, if the target display page is the current display page shown in fig. 7 and it is detected that the user clicks the content display frame 62 in the target display page, the landscape sub-category associated with the content display frame 62 may be taken as the target sub-category.
And S570, displaying the image content under the target sub-category in the target display page.
For example, if the target display page is the current display page as shown in fig. 7, and the target sub-category is the landscape sub-category, the image content under the target sub-category is the video content under the landscape sub-category. Specifically, the video contents under the view subcategories can be sequentially displayed in the current display page according to the time sequence.
And S580, acquiring the target image content selected by the user from the image contents under the target subcategory.
Specifically, the target image content selected by the user from the image contents under the target sub-category can be acquired according to the user's click operation. For example, as shown in fig. 8, suppose the landscape subcategory includes four video contents: video content A, video content B, video content C, and video content D. If it is detected that the user clicks video content A, video content A may be taken as the target image content.
In an actual scenario, a user may click an image content by mistake. Therefore, to improve the user experience, optionally, if the number of clicks on a certain image content under a target sub-category is even, it is determined that the user has not selected that image content; if the number of clicks is odd, it is determined that the user has selected it.
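This odd/even click rule amounts to toggling selection on every click, which can be sketched as follows (the event representation is a hypothetical assumption for the sketch):

```python
from collections import Counter

def selected_items(click_events):
    """Given the sequence of image contents a user clicked, return those
    whose click count is odd (selected). An even count means every
    selection was undone by a later click, so the item is not selected."""
    counts = Counter(click_events)
    return {item for item, n in counts.items() if n % 2 == 1}

# User clicks video A twice (select, then deselect) and video B once.
clicks = ["video A", "video B", "video A"]
print(selected_items(clicks))  # {'video B'}
```

Counting parity rather than tracking state per click makes accidental double-clicks self-cancelling, which is exactly the mis-click protection the embodiment describes.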
It should be noted that, in this embodiment, the image content in the preset picture library is classified in two stages, so that even when the picture library contains a large amount of content, the user can quickly locate the required image content, which further improves the efficiency of audio/video generation. Meanwhile, the probability that the user selects the wrong image content is reduced, the user is spared from repeatedly re-selecting image content, and the user experience is improved.
And S590, generating the audio and video according to the audio content and the target image content.
According to the technical scheme provided by this embodiment of the disclosure, two classes of processing results are obtained by classifying the image contents in the preset picture library; each class of processing results is then sub-classified, and the sub-classified results are displayed in the current display page and the pre-display page of the current display page, respectively, so that the user can select the required image content by sliding; the audio/video is then generated from the target image content selected by the user and the audio content of the recorded target song. Compared with the prior art, the two-stage classified display of the image contents in the preset picture library makes it convenient for a user to quickly locate the required image content, improving the efficiency of audio/video generation. In addition, the probability that the user selects the wrong image content is reduced, further improving the user experience.
Fig. 9 is a schematic structural diagram of an audio/video generation apparatus provided in an embodiment of the present disclosure, which is applicable to the situation where an audio/video is generated based on the audio content of a recorded target song and existing image content in a preset picture library. It is particularly suitable for the case where the preset picture library contains a large amount of existing image content. The apparatus may be implemented by software and/or hardware, and may be configured on an electronic device. Optionally, the electronic device may be a server device carrying the audio/video generation function, or a terminal device configured with a karaoke application program provided by the server. As shown in fig. 9, the audio/video generation apparatus in the embodiment of the present disclosure includes:
the image content obtaining module 910 is configured to, if it is determined that the user selects to add an existing image content to the audio content of the recorded target song, obtain an image content in a preset picture library, where the image content includes a video content and a picture content;
the image content processing module 920 is configured to perform classification processing on the acquired image content;
a processing result display module 930, configured to display the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
the page selection display module 940 is configured to select and display a target display page from a current display page and a pre-display page according to a sliding operation of a user;
a target image content obtaining module 950, configured to obtain target image content selected by a user in a target display page;
and the audio and video generating module 960 is configured to generate an audio and video according to the audio content and the target image content.
Illustratively, the preset picture library includes at least one of: the system comprises a picture library configured by user terminal equipment, a picture library in an application program, a cloud picture library of a user and a picture library of a server.
Illustratively, the image content processing module 920 may be specifically configured to:
and classifying the acquired image content according to the storage format and/or the number of bytes.
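The first-stage classification by storage format can be sketched as below. The extension sets are illustrative assumptions; the patent states only that classification uses the storage format and/or the number of bytes of the image content.

```python
import os

# Extension sets are assumptions for this sketch, not part of the disclosure.
VIDEO_FORMATS = {".mp4", ".avi", ".mov"}
PICTURE_FORMATS = {".jpg", ".jpeg", ".png", ".gif"}

def classify(image_contents):
    """Split image contents into picture content and video content
    according to each file's storage format (its extension)."""
    result = {"picture": [], "video": []}
    for path in image_contents:
        ext = os.path.splitext(path)[1].lower()
        if ext in VIDEO_FORMATS:
            result["video"].append(path)
        elif ext in PICTURE_FORMATS:
            result["picture"].append(path)
    return result

print(classify(["a.mp4", "b.jpg", "c.PNG"]))
```

A byte-count criterion could be combined with this (e.g., treating files above a size threshold as video) where the extension alone is ambiguous.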
Illustratively, the processing result display module 930 may be specifically configured to:
performing sub-classification on each classification processing result according to a set label;
and respectively displaying the result of each type of processing result after sub-classification in the current display page and the pre-display page of the current display page.
Illustratively, the setting tag includes at least one of: landscape, application name, portrait, time, and place.
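The second-stage sub-classification by set label can be sketched as a grouping step. The `tag` field on each item is a hypothetical representation used only for this sketch; in practice the label (landscape, application name, portrait, time, or place) might come from file metadata or content analysis.

```python
def sub_classify(contents, label_of):
    """Group one class of contents (e.g. all video contents) into
    sub-categories keyed by a set label such as landscape, application
    name, portrait, time, or place."""
    groups = {}
    for item in contents:
        groups.setdefault(label_of(item), []).append(item)
    return groups

# `tag` is a hypothetical field used only for this sketch.
videos = [
    {"name": "v1", "tag": "landscape"},
    {"name": "v2", "tag": "portrait"},
    {"name": "v3", "tag": "landscape"},
]
by_tag = sub_classify(videos, lambda v: v["tag"])
print(sorted(by_tag))  # ['landscape', 'portrait']
```

Each resulting group maps directly onto one content display frame in the current or pre-display page.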
Illustratively, the target image content obtaining module 950 may be specifically configured to:
acquiring a target sub-category selected by a user in a target display page;
displaying image contents under the target sub-category in a target display page;
and acquiring target image content selected by the user from the image content under the target subcategory.
For example, if the target image content is at least two target image contents, the audio/video generation module 960 may specifically be configured to:
determining the playing sequence of the at least two target image contents according to the user's selection operation in the target display page;
and generating the audio/video according to the audio content, the at least two target image contents, and their playing sequence.
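One way to realize this for picture contents is to give each picture an equal share of the song's duration, in the user-selected order. The equal-interval timing is an assumption for this sketch; the patent only requires that the contents play in the selected sequence.

```python
def build_slideshow(audio_duration, pictures_in_order):
    """Given the recorded audio length (seconds) and the pictures in the
    order the user selected them, assign each picture an equal display
    interval so the slideshow spans the whole song.
    Returns a list of (picture, start_time) pairs."""
    per_picture = audio_duration / len(pictures_in_order)
    return [(p, round(i * per_picture, 2))
            for i, p in enumerate(pictures_in_order)]

# Equal-interval timing is an assumption, not stated in the disclosure.
timeline = build_slideshow(30.0, ["pic A", "pic B", "pic C"])
print(timeline)  # each entry is (picture, start_time)
```

The timeline can then be handed to whatever muxer assembles the final audio/video.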
For example, if the target image content is the target video content, the audio/video generation module 960 may specifically be configured to:
performing an audio deletion operation on the target video content, i.e., removing its original audio track;
and generating the audio/video according to the audio content and the target video content after the deletion operation.
Referring to fig. 10, a schematic structural diagram of an electronic device 1000 suitable for implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. Optionally, the electronic device in this embodiment may be a server device that carries an audio and video generation function, and may also be a terminal device that configures a karaoke application program provided by the server. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, magnetic tape, hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the electronic device 1000 to communicate with other devices wirelessly or by wire to exchange data. While fig. 10 illustrates an electronic device 1000 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 1009, or installed from the storage means 1008, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, acquiring the image content in a preset picture library, wherein the image content comprises video content and picture content; classifying the acquired image content, and respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories; selecting and displaying a target display page from a current display page and a pre-display page according to sliding operation of a user; and acquiring target image content selected by a user in the target display page, and generating an audio/video according to the audio content and the target image content.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
1. According to one or more embodiments of the present disclosure, there is provided an audio and video generation method including:
if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, acquiring the image content in a preset picture library, wherein the image content comprises video content and picture content;
classifying the acquired image content, and respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of the user;
and acquiring target image content selected by a user in the target display page, and generating audio and video according to the audio content and the target image content.
According to one or more embodiments of the present disclosure, the preset picture library in the above method includes at least one of: the system comprises a picture library configured by user terminal equipment, a picture library in an application program, a cloud picture library of a user and a picture library of a server.
According to one or more embodiments of the present disclosure, the classifying the acquired image content in the above method includes:
and classifying the acquired image content according to the storage format and/or the number of bytes.
According to one or more embodiments of the present disclosure, in the method, the step of displaying the classification processing result in the current display page and the pre-display page of the current display page according to the category includes:
performing sub-classification on each classification processing result according to a set label;
and respectively displaying the result of each type of processing result after sub-classification in the current display page and the pre-display page of the current display page.
According to one or more embodiments of the present disclosure, the setting of the tag in the above method includes at least one of: landscape, application name, portrait, time, and place.
According to one or more embodiments of the present disclosure, the acquiring target image content selected by a user in the target display page in the above method includes:
acquiring a target sub-category selected by a user in the target display page;
displaying the image content under the target sub-category in the target display page;
and acquiring target image content selected by the user from the image content under the target sub-category.
According to one or more embodiments of the present disclosure, in the method, if the target image content is at least two target image contents, generating an audio/video according to the audio content and the target image content, includes:
determining the playing sequence of the at least two target image contents according to the user's selection operation in the target display page;
and generating the audio/video according to the audio content, the at least two target image contents, and their playing sequence.
According to one or more embodiments of the present disclosure, in the method, if the target image content is the target video content, generating an audio/video according to the audio content and the target image content, includes:
carrying out audio deleting operation on the target video content;
and generating an audio/video according to the audio content and the target image content after the deletion operation.
2. According to one or more embodiments of the present disclosure, there is provided an audio/video generating apparatus including:
the image content acquisition module is used for acquiring image content in a preset picture library if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, wherein the image content comprises video content and picture content;
the image content processing module is used for classifying the acquired image content;
the processing result display module is used for respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
the page selection display module is used for selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of a user;
the target image content acquisition module is used for acquiring target image content selected by a user in the target display page;
and the audio and video generation module is used for generating audio and video according to the audio content and the target image content.
According to one or more embodiments of the present disclosure, the preset picture library in the above apparatus includes at least one of: the system comprises a picture library configured by user terminal equipment, a picture library in an application program, a cloud picture library of a user and a picture library of a server.
According to one or more embodiments of the present disclosure, the image content processing module in the above apparatus is specifically configured to:
and classifying the acquired image content according to the storage format and/or the number of bytes.
According to one or more embodiments of the present disclosure, the processing result display module in the apparatus is specifically configured to:
performing sub-classification on each classification processing result according to a set label;
and respectively displaying the result of each type of processing result after sub-classification in the current display page and the pre-display page of the current display page.
According to one or more embodiments of the present disclosure, the setting tag in the above apparatus includes at least one of: landscape, application name, portrait, time, and place.
According to one or more embodiments of the present disclosure, the target image content acquiring module in the above apparatus is specifically configured to:
acquiring a target sub-category selected by a user in the target display page;
displaying the image content under the target sub-category in the target display page;
and acquiring target image content selected by the user from the image content under the target sub-category.
According to one or more embodiments of the present disclosure, in the apparatus, if the target image content is at least two target image contents, the audio/video generation module is specifically configured to:
determining the playing sequence of the at least two target image contents according to the user's selection operation in the target display page;
and generating the audio/video according to the audio content, the at least two target image contents, and their playing sequence.
According to one or more embodiments of the present disclosure, in the apparatus, if the target image content is the target video content, the audio/video generation module is specifically configured to:
carrying out audio deleting operation on the target video content;
and generating an audio/video according to the audio content and the target image content after the deletion operation.
3. According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the audio/video generation methods provided by the present disclosure.
4. According to one or more embodiments of the present disclosure, there is provided a readable medium having stored thereon a computer program which, when executed by a processor, implements the audio-video generation method according to any one of the aspects provided in the present disclosure.
The foregoing description is merely a preferred embodiment of the disclosure and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An audio-video generation method, characterized by comprising:
if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, acquiring the image content in a preset picture library, wherein the image content comprises video content and picture content;
classifying the acquired image content according to the storage format and/or the number of bytes of the image content, and respectively displaying the classification processing result in the current display page and the pre-display page of the current display page according to the category;
selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of a user;
and acquiring target image content selected by a user in the target display page, and generating audio and video according to the audio content and the target image content.
2. The method of claim 1, wherein the preset picture library comprises at least one of: the system comprises a picture library configured by user terminal equipment, a picture library in an application program, a cloud picture library of a user and a picture library of a server.
3. The method of claim 1, wherein displaying the classification processing results in the current presentation page and the pre-presentation page of the current presentation page according to categories, respectively, comprises:
performing sub-classification on each classification processing result according to a set label;
and respectively displaying the result of each type of processing result after sub-classification in the current display page and the pre-display page of the current display page.
4. The method of claim 3, wherein the setting tag comprises at least one of: landscape, application name, portrait, time, and place.
5. The method of claim 3, wherein obtaining the target image content selected by the user in the target presentation page comprises:
acquiring a target sub-category selected by a user in the target display page;
displaying the image content under the target sub-category in the target display page;
and acquiring target image content selected by the user from the image content under the target sub-category.
6. The method according to claim 1, wherein if the target image content is at least two target image contents, generating an audio/video according to the audio content and the target image content, comprises:
determining the playing sequence of the at least two target image contents according to the user's selection operation in the target display page;
and generating the audio/video according to the audio content, the at least two target image contents, and their playing sequence.
7. The method of claim 1, wherein if the target image content is a target video content, generating an audio/video according to the audio content and the target image content comprises:
carrying out audio deleting operation on the target video content;
and generating audio and video according to the audio content and the target image content after the deletion operation.
8. An audio-video generation device characterized by comprising:
the image content acquisition module is used for acquiring image content in a preset picture library if the fact that the user selects to add the existing image content to the audio content of the recorded target song is determined, wherein the image content comprises video content and picture content;
the image content processing module is used for classifying the acquired image content according to the storage format and/or the byte number of the image content;
the processing result display module is used for respectively displaying the classification processing results in the current display page and the pre-display page of the current display page according to the categories;
the page selection display module is used for selecting and displaying a target display page from the current display page and the pre-display page according to the sliding operation of a user;
the target image content acquisition module is used for acquiring target image content selected by a user in the target display page;
and the audio and video generation module is used for generating audio and video according to the audio content and the target image content.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the audio/video generation method as claimed in any one of claims 1-7.
10. A readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the audio-video generation method according to any one of claims 1 to 7.
CN201910657261.3A 2019-07-19 2019-07-19 Audio and video generation method and device, electronic equipment and readable medium Active CN110381356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910657261.3A CN110381356B (en) 2019-07-19 2019-07-19 Audio and video generation method and device, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN110381356A CN110381356A (en) 2019-10-25
CN110381356B true CN110381356B (en) 2022-06-07

Family

ID=68254349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910657261.3A Active CN110381356B (en) 2019-07-19 2019-07-19 Audio and video generation method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN110381356B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111901667B (en) * 2020-07-31 2021-08-20 腾讯科技(深圳)有限公司 Screen recording method and related device
CN114584716A (en) * 2022-03-08 2022-06-03 北京字跳网络技术有限公司 Picture processing method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055845A (en) * 2010-11-30 2011-05-11 深圳市五巨科技有限公司 Mobile communication terminal and picture switching method of music player thereof
JP2012195952A (en) * 2012-05-30 2012-10-11 Sony Corp Image display control method, image display apparatus and program
CN104199876A (en) * 2014-08-20 2014-12-10 广州三星通信技术研究有限公司 Method and device for associating music and picture
CN105549847A (en) * 2015-12-10 2016-05-04 广东欧珀移动通信有限公司 Picture displaying method of song playing interface and user terminal
CN106649586A (en) * 2016-11-18 2017-05-10 腾讯音乐娱乐(深圳)有限公司 Playing method of audio files and device of audio files
CN107170471A (en) * 2017-03-24 2017-09-15 联想(北京)有限公司 The processing method and electronic equipment of a kind of music background
CN108154889A (en) * 2016-12-02 2018-06-12 上海博泰悦臻电子设备制造有限公司 A kind of music control method, system, player and a kind of regulator control system
CN109327608A (en) * 2018-09-12 2019-02-12 广州酷狗计算机科技有限公司 Method, terminal, server and the system that song is shared

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008529345A (en) * 2005-01-20 2008-07-31 ロウェ,フレデリック System and method for generating and distributing personalized media
JP4828390B2 (en) * 2006-12-11 2011-11-30 アルパイン株式会社 In-vehicle audio apparatus and method for imaging and transmitting information of in-vehicle audio apparatus
KR20110005093A (en) * 2009-07-09 2011-01-17 삼성전자주식회사 Image processing method and apparatus for reducing compression noise
CN102780932B (en) * 2011-05-13 2016-08-03 上海信颐电子科技有限公司 Multiwindow player method and system
CN104050186A (en) * 2013-03-13 2014-09-17 厦门歌乐电子企业有限公司 Information classifying method and device
CN104135605B (en) * 2013-06-21 2015-08-05 腾讯科技(深圳)有限公司 Photographic method and device
US20170092323A1 (en) * 2014-03-10 2017-03-30 Paul Goldman Audio/Video Merge Tool
CN105468593A (en) * 2014-08-07 2016-04-06 小米科技有限责任公司 Picture display method and device
CN104244035B (en) * 2014-08-27 2018-10-02 南京邮电大学 Network video stream sorting technique based on multi-level clustering
CN105335458B (en) * 2015-09-23 2019-03-12 努比亚技术有限公司 Preview picture method and device
CN105338371B (en) * 2015-10-29 2019-05-17 四川奇迹云科技有限公司 A kind of multi-media transcoding dispatching method and device
CN107704519B (en) * 2017-09-01 2022-08-19 毛蔚青 User side photo album management system based on cloud computing technology and interaction method thereof
CN107765945A (en) * 2017-10-17 2018-03-06 广东欧珀移动通信有限公司 A kind of file management method, device, terminal and computer-readable recording medium
CN109922252B (en) * 2017-12-12 2021-11-02 北京小米移动软件有限公司 Short video generation method and device and electronic equipment
CN109618222B (en) * 2018-12-27 2019-11-22 北京字节跳动网络技术有限公司 A kind of splicing video generation method, device, terminal device and storage medium

Also Published As

Publication number Publication date
CN110381356A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
US11206448B2 (en) Method and apparatus for selecting background music for video shooting, terminal device and medium
CN111510760B (en) Video information display method and device, storage medium and electronic equipment
CN109976620B (en) Method, device, equipment and storage medium for determining list item display attribute information
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN110324718B (en) Audio and video generation method and device, electronic equipment and readable medium
US11928152B2 (en) Search result display method, readable medium, and terminal device
CN109684589B (en) Client comment data processing method and device and computer storage medium
WO2023011259A1 (en) Information display method and apparatus, electronic device, and storage medium
CN111970571B (en) Video production method, device, equipment and storage medium
WO2023088442A1 (en) Live streaming preview method and apparatus, and device, program product and medium
WO2023051294A9 (en) Prop processing method and apparatus, and device and medium
WO2023016349A1 (en) Text input method and apparatus, and electronic device and storage medium
CN112397104B (en) Audio and text synchronization method and device, readable medium and electronic equipment
CN114117282A (en) Information display method, device, equipment and storage medium
CN113727170A (en) Video interaction method, device, equipment and medium
CN113128185A (en) Interaction method and device and electronic equipment
CN110381356B (en) Audio and video generation method and device, electronic equipment and readable medium
CN109635131B (en) Multimedia content list display method, pushing method, device and storage medium
CN114584716A (en) Picture processing method, device, equipment and storage medium
WO2023134617A1 (en) Template selection method and apparatus, and electronic device and storage medium
WO2023088484A1 (en) Method and apparatus for editing multimedia resource scene, device, and storage medium
CN111310086A (en) Page jump method and device and electronic equipment
CN115269920A (en) Interaction method, interaction device, electronic equipment and storage medium
CN111221455B (en) Material display method and device, terminal and storage medium
CN114817631A (en) Media content distribution method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant