CN111797262A - Poetry generation method and device, electronic equipment and storage medium

Poetry generation method and device, electronic equipment and storage medium

Info

Publication number
CN111797262A
Authority
CN
China
Prior art keywords
poetry
target
image
processed
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010591279.0A
Other languages
Chinese (zh)
Inventor
崔志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202010591279.0A priority Critical patent/CN111797262A/en
Publication of CN111797262A publication Critical patent/CN111797262A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a poetry generating method, apparatus, electronic device and storage medium. The method includes: acquiring an image to be processed input through a terminal; obtaining, through a pre-trained image recognition model and according to the image to be processed, a target keyword corresponding to a target object in the image to be processed; and determining a target poetry corresponding to the target keyword from a preset poetry set. Because the target keyword determined by the image recognition model for the target object is used as the theme of the image to be processed, the determined target poetry can accurately match that theme, and because the target poetry is selected directly from the preset poetry set using the target keyword, it can be acquired quickly.

Description

Poetry generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of machine learning technologies, and in particular to a poetry generating method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of social networks, posting favorite or self-shot images on a social platform and pairing the displayed images with text that matches their theme to express one's current mood has become one of the most popular ways for users to socialize. However, simply pairing an image with text that matches its subject cannot fully meet users' needs. Poetry, as a literary art that gives voice to the inner self, is widely loved; to better convey the mood a user wants to express through an image, the user may pair the displayed image with poetry that matches the image's theme. However, matching a displayed image with poetry that fits its theme places high demands on the user's literary skill, and it is difficult for an ordinary user to quickly and accurately match a displayed image with suitable poetry.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a poetry generating method, apparatus, electronic device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a poetry generating method, the method including:
acquiring an image to be processed input through a terminal;
obtaining a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model according to the image to be processed;
and determining a target poetry corresponding to the target keyword from a preset poetry set.
Optionally, the obtaining, according to the image to be processed, a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model includes:
taking the image to be processed as the input of the image recognition model to obtain a keyword corresponding to each target object in the target objects and a confidence coefficient corresponding to each keyword;
and taking a preset number of keywords with the maximum confidence level in the keywords as the target keywords.
Optionally, the determining, from a preset poetry set, a target poetry corresponding to the target keyword includes:
determining a plurality of candidate poetry corresponding to the target keyword from the preset poetry set;
and determining the target poetry from the plurality of candidate poetry.
Optionally, the preset poetry set comprises a plurality of poetry, and determining a plurality of candidate poetry corresponding to the target keyword from the preset poetry set comprises:
and taking poems including the target keywords in the poems as the candidate poems.
Optionally, before determining a target poetry corresponding to the target keyword from the preset poetry set, the method further includes:
acquiring poetry demand information, wherein the poetry demand information comprises at least one of poetry type, poetry format and poetry quantity;
the determining the target poetry from the plurality of candidate poetry comprises:
and taking the candidate poetry matched with the poetry demand information in the plurality of candidate poetry as the target poetry.
Optionally, the determining the target poetry from the plurality of candidate poetry comprises:
and taking the candidate poetry selected by the user as the target poetry from the plurality of candidate poetry.
Optionally, the method further comprises:
and displaying the image to be processed and the target poetry.
Optionally, the image recognition model is trained by:
acquiring a sample training set, wherein the sample training set comprises training images and training keywords corresponding to the training images;
and training a preset training model according to the sample training set to obtain the image recognition model.
Optionally, the training a preset training model according to the sample training set to obtain the image recognition model includes:
and performing model training by taking the training images and the training keywords as training samples to obtain the image recognition model.
According to a second aspect of the embodiments of the present disclosure, there is provided a poetry generating apparatus, the apparatus including:
an acquisition module configured to acquire an image to be processed input through a terminal;
the recognition module is configured to obtain a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model according to the image to be processed;
the determining module is configured to determine a target poetry corresponding to the target keyword from a preset poetry set.
Optionally, the identification module comprises:
the first recognition submodule is configured to use the image to be processed as the input of the image recognition model, and obtain a keyword corresponding to each target object in the target objects and a confidence coefficient corresponding to each keyword;
and the second identification submodule is configured to take the preset number of keywords with the maximum confidence level in the keywords as the target keywords.
Optionally, the determining module includes:
the first determining sub-module is configured to determine a plurality of candidate poetry corresponding to the target keyword from the preset poetry set;
a second determining sub-module configured to determine the target poetry from the plurality of candidate poetry.
Optionally, the preset poetry set comprises a plurality of poetry, and the first determining sub-module is configured to use poetry including the target keyword in the plurality of poetry as the candidate poetry.
Optionally, the obtaining module is further configured to obtain poetry demand information before determining a target poetry corresponding to the target keyword from the preset poetry set, where the poetry demand information includes at least one of poetry type, poetry format and poetry quantity;
the second determining submodule is configured to take a candidate poetry matched with the poetry demand information in the plurality of candidate poetry as the target poetry.
Optionally, the second determining submodule is configured to take the candidate poetry selected by the user as the target poetry from the plurality of candidate poetry.
Optionally, the apparatus further comprises:
and the display module is configured to display the image to be processed and the target poetry.
Optionally, the image recognition model is trained by:
acquiring a sample training set, wherein the sample training set comprises training images and training keywords corresponding to the training images;
and training a preset training model according to the sample training set to obtain the image recognition model.
Optionally, the training a preset training model according to the sample training set to obtain the image recognition model includes:
and performing model training by taking the training images and the training keywords as training samples to obtain the image recognition model.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the poetry generation method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the poetry generation method provided by the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: the image to be processed input through the terminal is acquired, the target keyword corresponding to the target object in the image to be processed is obtained through the pre-trained image recognition model, and finally the target poetry corresponding to the target keyword is determined from the preset poetry set. Because the target keyword determined by the image recognition model is used as the theme of the image to be processed, the determined target poetry can accurately match that theme; and because the target poetry is selected directly from the preset poetry set using the target keyword, it can be acquired quickly.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a poetry generation method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating one step 102 of the embodiment shown in fig. 1.
Fig. 3 is a flow chart illustrating one step 103 of the embodiment shown in fig. 1.
FIG. 4 is a flow chart illustrating another poetry generation method in accordance with an exemplary embodiment.
FIG. 5 is a flow chart illustrating yet another poetry generation method in accordance with an exemplary embodiment.
FIG. 6 is a flow diagram illustrating training an image recognition model according to an example embodiment.
Fig. 7 is a block diagram illustrating a poetry generating apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram of an identification module shown in the embodiment of FIG. 7.
FIG. 9 is a block diagram of one type of determination module shown in the embodiment shown in FIG. 7.
Fig. 10 is a block diagram illustrating another poetry generating apparatus according to an exemplary embodiment.
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before introducing the poetry generating method and apparatus, electronic device and storage medium provided by the present disclosure, application scenarios related to the various embodiments of the present disclosure are first introduced. The application scenario may be one in which a user selects a target poetry for an image to be processed through a terminal. The terminal may be a mobile terminal such as a smart phone, a tablet computer, a smart watch, a smart bracelet or a PDA (Personal Digital Assistant), or a fixed terminal such as a desktop computer.
FIG. 1 is a flow chart illustrating a poetry generation method according to an exemplary embodiment. As shown in fig. 1, the method may include the following steps.
In step 101, a to-be-processed image input through a terminal is acquired.
For example, when a user wants to pair an image to be processed with poetry corresponding to its theme, the user inputs the image to be processed into the terminal, and the image is then further processed to obtain poetry matching its theme. Obtaining the poetry matching the theme of the image to be processed may be completed by the terminal or by a server, and the image to be processed may be an image captured by the user through the terminal or an image directly selected by the user from the terminal's local storage.
In step 102, according to the image to be processed, a target keyword corresponding to a target object in the image to be processed is obtained through a pre-trained image recognition model.
In this step, the image to be processed may be input into a pre-trained image recognition model to obtain a target keyword corresponding to a target object in the image to be processed output by the image recognition model. For example, when the image to be processed is a landscape image captured by a user through a mobile phone and the target objects included in the landscape image are flowers, grass and trees, the target keywords corresponding to the target objects can be obtained through the image recognition model as "flower", "grass" and "tree". The image recognition model may be, for example, a ResNet (Residual Network) model, a VGG (Visual Geometry Group Network) model, or any other model capable of recognizing a target keyword corresponding to a target object in the image to be processed, which is not specifically limited by the present disclosure.
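As a purely illustrative sketch (not the claimed implementation), the recognition step could be backed by an off-the-shelf torchvision ResNet; the ResNet-50 backbone, the label-to-keyword mapping and the class indices below are assumptions, since the disclosure only requires some model that outputs keywords and confidences for the target objects.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical mapping from classifier output indices to poem-theme keywords;
# the indices are placeholders, and a real system would train its own
# classifier over its own keyword vocabulary.
LABEL_TO_KEYWORD = {985: "flower", 984: "tree", 970: "mountain"}

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

def recognize_keywords(image_path: str) -> dict[str, float]:
    """Return a {keyword: confidence} mapping for the objects in the image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)             # shape (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]  # class probabilities
    return {kw: float(probs[idx]) for idx, kw in LABEL_TO_KEYWORD.items()}
```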
In step 103, target poetry corresponding to the target keyword is determined from a preset poetry set.
For example, after the target keyword is obtained, the target keyword may be used as the theme of the image to be processed, and poems matching the target keyword are selected from the plurality of poems included in the preset poetry set. The poems matching the target keyword are then screened according to a preset poetry selection condition, and the poems meeting the condition are selected as the target poetry corresponding to the target keyword. For example, the poetry selection condition may be to select, from among the poems matching the target keyword, those whose format is seven-character regulated verse as the target poetry. So that the determined target poetry accurately matches the theme of the image to be processed, the poems included in the preset poetry set may be high-quality poems selected after manual screening and review.
It should be noted that determining the target poetry may be completed by the terminal or by the server. When the target poetry is determined by the terminal, the image recognition model and the preset poetry set can be stored locally on the terminal; after acquiring the image to be processed, the terminal can directly input it into the image recognition model and determine the target poetry from the preset poetry set according to the target keyword output by the model. When the target poetry is determined by the server, the image recognition model and the preset poetry set can be stored on the server; after acquiring the image to be processed, the terminal can send it to the server, and the server determines the target poetry using the image recognition model and the preset poetry set.
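For the server-side variant, the terminal would upload the image and receive poems back. The following is a minimal sketch under assumed names only: the endpoint path, the stubbed recognize_keywords and select_poems helpers, and the JSON response shape are all hypothetical, not part of the disclosure.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def recognize_keywords(path: str) -> dict[str, float]:
    # Placeholder stub standing in for the pre-trained image recognition model.
    return {"mountain": 0.4, "tree": 0.35, "grass": 0.15, "flower": 0.1}

def select_poems(keywords: list[str]) -> list[str]:
    # Placeholder stub standing in for the lookup over the preset poetry set.
    return ["..."]

@app.route("/poem", methods=["POST"])
def generate_poem():
    image_file = request.files["image"]        # image uploaded by the terminal
    image_file.save("/tmp/upload.jpg")         # temporary path, placeholder only
    confidences = recognize_keywords("/tmp/upload.jpg")
    keywords = sorted(confidences, key=confidences.get, reverse=True)[:2]
    return jsonify({"keywords": keywords, "poems": select_poems(keywords)})

if __name__ == "__main__":
    app.run(port=8000)
```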
By adopting the scheme, the target poetry is determined by taking the target key words corresponding to the target objects in the image to be processed determined by the image recognition model as the theme of the image to be processed, the determined target poetry can be accurately matched with the theme of the image to be processed, and the target poetry can be quickly obtained by directly selecting the target poetry from the preset poetry set by utilizing the target key words.
Fig. 2 is a flow chart illustrating one step 102 of the embodiment shown in fig. 1. As shown in fig. 2, step 102 may include the steps of:
in step 1021, the image to be processed is used as an input of the image recognition model, and a keyword corresponding to each target object in the target objects and a confidence corresponding to each keyword are obtained.
For example, the target keyword may be determined as follows: first, an image vector corresponding to the image to be processed is generated, and the image vector is used as the input of the image recognition model, so as to obtain the keyword corresponding to each target object output by the image recognition model and the confidence corresponding to each keyword. For example, when the target objects included in the image to be processed are flowers, grass, trees and mountains, the image recognition model can determine that the keywords corresponding to the target objects are "flower", "grass", "tree" and "mountain", with confidences of 0.1, 0.15, 0.35 and 0.4, respectively.
In step 1022, a preset number of keywords with the highest confidence level among the keywords are used as the target keywords.
For example, after determining the keyword corresponding to each target object and the confidence corresponding to each keyword, a preset number of keywords with the highest confidence in the keywords may be used as the target keywords. For example, in the case where the confidence degrees of the keywords output by the image recognition model are 0.1, 0.15, 0.35, and 0.4, respectively, if the preset number is 2, the keywords having the confidence degrees of 0.35 and 0.4 are used as the target keywords. Furthermore, keywords with confidence degrees larger than or equal to a preset confidence degree threshold value can be used as target keywords. For example, when the confidence levels corresponding to the respective keywords output by the image recognition model are 0.1, 0.2, and 0.7, the keyword having the confidence level of 0.7 is regarded as the target keyword when the confidence level threshold is 0.5.
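A small sketch of the selection rule just described, assuming the keyword-to-confidence mapping produced by the recognition step; both the top-k variant and the confidence-threshold variant are shown, and the numbers simply mirror the examples above.

```python
def select_top_keywords(confidences: dict[str, float],
                        top_k: int = 2,
                        min_confidence: float | None = None) -> list[str]:
    """Keep the top_k most confident keywords, or, when a threshold is given,
    every keyword whose confidence reaches min_confidence."""
    ranked = sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)
    if min_confidence is not None:
        return [kw for kw, conf in ranked if conf >= min_confidence]
    return [kw for kw, _ in ranked[:top_k]]

confidences = {"flower": 0.1, "grass": 0.15, "tree": 0.35, "mountain": 0.4}
print(select_top_keywords(confidences, top_k=2))               # ['mountain', 'tree']
print(select_top_keywords({"flower": 0.1, "grass": 0.2, "moon": 0.7},
                          min_confidence=0.5))                 # ['moon']
```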
Fig. 3 is a flow chart illustrating one step 103 of the embodiment shown in fig. 1. As shown in fig. 3, step 103 may include the steps of:
in step 1031, a plurality of candidate poems corresponding to the target keyword are determined from the preset poem set.
For example, the target poetry may be determined as follows: first, according to the target keyword, the poems in the preset poetry set that include the target keyword are used as candidate poems, so that a plurality of candidate poems corresponding to the target keyword are obtained. To further improve the accuracy of the selected candidate poems, a knowledge graph mapping target keywords to poetry keywords (preset keywords corresponding to poem themes) can be established in advance. After the target keyword is determined, the poetry keyword most relevant to the target keyword can be determined from the graph; it is then checked whether any poem in the preset poetry set contains that poetry keyword, and if so, the poems containing it are used as the candidate poems.
In another possible implementation, the target poetry may also be determined as follows: a keyword label is established in advance for each poem in the plurality of poems included in the preset poetry set, the keyword label comprising a poetry keyword corresponding to the theme of that poem. After the target keyword is determined, it can be matched against the poetry keywords in the keyword labels, and when a poetry keyword identical to the target keyword exists, the poem corresponding to that poetry keyword is used as the target poetry. For example, famous and classical poems are collected from the internet, and after the preset poetry set is established through manual screening and review, the poems in the set can be labeled: the poetry keywords contained in each poem are marked out and serve as that poem's keyword label.
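Both retrieval variants described above (full-text matching and pre-built keyword labels) can be sketched as simple filters over the preset poetry set; the poem records, tags and field names below are invented placeholders rather than the disclosure's data model.

```python
# Each record carries the poem text plus the labels discussed above.
POEM_SET = [
    {"title": "Poem A", "text": "...", "keywords": {"moon", "mountain"},
     "type": "ancient", "format": "five-character quatrain"},
    {"title": "Poem B", "text": "...", "keywords": {"flower", "grass"},
     "type": "modern", "format": "seven-character regulated verse"},
]

def candidates_by_fulltext(target_keyword: str) -> list[dict]:
    """Variant 1: a poem is a candidate if its text contains the target keyword."""
    return [p for p in POEM_SET if target_keyword in p["text"]]

def candidates_by_label(target_keyword: str) -> list[dict]:
    """Variant 2: a poem is a candidate if its keyword label contains the target keyword."""
    return [p for p in POEM_SET if target_keyword in p["keywords"]]
```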
In step 1032, a target poem is determined from the plurality of candidate poems.
Furthermore, after the candidate poems are determined, the plurality of candidate poems can be screened according to a preset poetry selection condition, and the poems meeting the condition are selected as the target poetry. The poetry selection condition can be preset or adjusted in real time according to the user's specific requirements; for example, the preset condition may be that poems whose poetry type is ancient-style poetry are selected as the target poetry.
FIG. 4 is a flow chart illustrating another poetry generation method in accordance with an exemplary embodiment. As shown in fig. 4, before step 103, the method further comprises the steps of:
in step 104, poetry requirement information is obtained, and the poetry requirement information comprises at least one of poetry type, poetry format and poetry quantity.
In one scenario, before the target poetry is determined, the user can input poetry demand information to the terminal according to actual needs to adjust the poetry selection condition, so that the target poetry is selected accordingly. The poetry demand information includes at least one of poetry type, poetry format and poetry quantity (the poetry quantity may be 3, for example); the poetry type may include, for example, ancient-style poetry, modern-style (regulated) poetry and ci lyrics, and the poetry format may include, for example, five-character quatrains, seven-character quatrains and five-character regulated verse.
Further, step 1032 may be implemented by:
and taking the candidate poetry matched with the poetry demand information in the plurality of candidate poetry as a target poetry.
In one scenario, a poetry label can be established in advance for each poem in the plurality of poems included in the preset poetry set, and the poetry label can include the poetry type and poetry format of that poem. After the plurality of candidate poems are determined, the poetry demand information can be matched against the poetry types and poetry formats in the poetry labels, so that candidate poems matching the poetry demand information are selected from the plurality of candidate poems as the target poetry. For example, when the poetry demand information specifies a poetry format of five-character regulated verse and a poetry quantity of 3, three candidate poems whose format is five-character regulated verse can be selected from the plurality of candidate poems as the target poetry.
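Continuing the hypothetical poem records sketched earlier, matching the poetry demand information against the poetry labels can be written as one more filter; the field names and the five-character regulated verse example simply mirror the description above.

```python
def filter_by_demand(candidates: list[dict],
                     poem_type: str | None = None,
                     poem_format: str | None = None,
                     count: int | None = None) -> list[dict]:
    """Keep candidates matching the requested type/format, truncated to count."""
    matched = [p for p in candidates
               if (poem_type is None or p["type"] == poem_type)
               and (poem_format is None or p["format"] == poem_format)]
    return matched[:count] if count is not None else matched

# e.g. three poems in five-character regulated verse, as in the example above,
# using the candidates_by_label() helper from the earlier sketch
targets = filter_by_demand(candidates_by_label("flower"),
                           poem_format="five-character regulated verse",
                           count=3)
```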
Optionally, step 1032 may also be implemented by:
and taking the candidate poetry selected by the user as a target poetry from the plurality of candidate poetry.
In another scenario, after the plurality of candidate poems are determined, the candidate poems can be displayed to the user, the user selects among them, and the candidate poem selected by the user is used as the target poetry. For example, the user may select the candidate poem through a designated app (application) installed on the terminal, or by a designated operation performed on the display interface of the terminal (for example, a long-press operation, a double-click operation, or a zoom-in gesture in a designated area).
FIG. 5 is a flow chart illustrating yet another poetry generation method in accordance with an exemplary embodiment. As shown in fig. 5, the method further comprises the steps of:
in step 105, the image to be processed and the target poetry are presented.
For example, after the target poetry is determined, the image to be processed and the target poetry may be displayed on the display interface of the terminal. Further, the user may input a target display instruction to the terminal so that the terminal displays the image to be processed and the target poetry according to that instruction, where the target display instruction may specify the relative position of the target poetry and the image to be processed (for example, the target poetry may be located above the image to be processed), the font of the target poetry (for example, Songti, Heiti, regular script or clerical script), the character color of the target poetry, and the like.
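Compositing the target poetry onto the image for display could, for instance, be done with Pillow. This is a sketch only: the font file name and layout coordinates are placeholders, and rendering Chinese text requires a font that actually contains CJK glyphs (Songti, Heiti, and so on would be chosen per the user's target display instruction).

```python
from PIL import Image, ImageDraw, ImageFont

def render_poem_on_image(image_path: str, poem: str, out_path: str,
                         font_path: str = "NotoSansCJK-Regular.ttc") -> None:
    """Draw the target poetry over the image to be processed and save the result."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, size=36)   # placeholder font file
    # Position and colour would come from the user's target display instruction.
    draw.multiline_text((20, 20), poem, font=font, fill=(255, 255, 255))
    image.save(out_path)
```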
FIG. 6 is a flow diagram illustrating the training of an image recognition model according to an exemplary embodiment. As shown in fig. 6, the image recognition model may be trained by:
in step 201, a sample training set is obtained, where the sample training set includes training images and training keywords corresponding to the training images.
For example, the way to train the image recognition model may be: firstly, a sample training set is obtained, wherein the sample training set may include training images and training keywords corresponding to the training images, and the training keywords may be keywords obtained by labeling target objects in the training images in advance. For example, when the training image is an image including the moon, the training keyword corresponding to the training image may be set to "month".
In step 202, a preset training model is trained according to a sample training set to obtain an image recognition model.
For example, after the sample training set is obtained, the training images and the training keywords may be used as training samples to perform model training, so as to obtain the image recognition model. To ensure that the image recognition model can accurately recognize the target keywords of the image to be processed, the trained image recognition model can be tested. For example, a test set composed of test images and test keywords corresponding to the test images may be obtained, and each group of test image and test keywords is then used in turn to test the image recognition model. If the accuracy of the target keywords output by the image recognition model is greater than or equal to a preset accuracy threshold, the keyword recognition accuracy of the image recognition model is determined to meet the requirement; otherwise, it does not meet the requirement and the image recognition model needs to be trained again.
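A condensed training-and-testing sketch, assuming the sample training set and the test set have already been wrapped as standard (image tensor, keyword index) datasets; the ResNet-50 backbone, the hyperparameters and the 0.9 accuracy threshold are illustrative assumptions, not values fixed by the disclosure.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import models

def train_and_evaluate(train_set, test_set, num_keywords: int,
                       epochs: int = 5, accuracy_threshold: float = 0.9):
    """Fine-tune a preset model on (image, keyword) samples, then test it."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_keywords)  # one class per keyword
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, keyword_ids in DataLoader(train_set, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            criterion(model(images), keyword_ids).backward()
            optimizer.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, keyword_ids in DataLoader(test_set, batch_size=32):
            correct += (model(images).argmax(dim=1) == keyword_ids).sum().item()
            total += keyword_ids.numel()
    accuracy = correct / total
    # If accuracy falls below the threshold, the model should be retrained,
    # as described in the paragraph above.
    return model, accuracy, accuracy >= accuracy_threshold
```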
In summary, the target poetry is determined by taking the target keywords corresponding to the target objects in the image to be processed determined by the image recognition model as the theme of the image to be processed, the determined target poetry can be accurately matched with the theme of the image to be processed, and the target poetry can be quickly obtained by directly selecting the target poetry from the preset poetry set by using the target keywords.
Fig. 7 is a block diagram illustrating a poetry generating apparatus according to an exemplary embodiment. As shown in fig. 7, the apparatus 300 includes an obtaining module 301, a recognition module 302 and a determination module 303.
An obtaining module 301 configured to obtain an image to be processed input through a terminal.
The recognition module 302 is configured to obtain a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model according to the image to be processed.
A determining module 303, configured to determine a target poetry corresponding to the target keyword from a preset poetry set.
FIG. 8 is a block diagram of an identification module shown in the embodiment of FIG. 7. As shown in fig. 8, the identification module 302 includes:
the first recognition submodule 3021 is configured to use the image to be processed as an input of the image recognition model, and obtain a keyword corresponding to each target object in the target objects and a confidence corresponding to each keyword.
The second recognition submodule 3022 is configured to take a preset number of keywords with the highest confidence level among the keywords as the target keyword.
FIG. 9 is a block diagram of one type of determination module shown in the embodiment shown in FIG. 7. As shown in fig. 9, the determining module 303 includes:
a first determining sub-module 3031 is configured to determine a plurality of candidate poems corresponding to the target keyword from a preset poem set.
A second determining sub-module 3032 is configured to determine a target poetry from a plurality of candidate poetry.
Optionally, the preset poetry set includes a plurality of poetry, and the first determining submodule 3031 is configured to take poetry including the target keyword in the plurality of poetry as a candidate poetry.
Optionally, the obtaining module 301 is further configured to obtain poetry requirement information before determining a target poetry corresponding to the target keyword from the preset poetry set, where the poetry requirement information includes at least one of poetry type, poetry format and poetry quantity.
The second determining sub-module 3032 is configured to take a candidate poetry matched with the poetry demand information in the plurality of candidate poetry as a target poetry.
Optionally, the second determining sub-module 3032 is configured to take the candidate poetry selected by the user as the target poetry from the plurality of candidate poetry.
Fig. 10 is a block diagram illustrating another poetry generating apparatus according to an exemplary embodiment. As shown in fig. 10, the apparatus 300 further includes: a presentation module 304.
And a presentation module 304 configured to present the image to be processed and the target poetry.
Optionally, the image recognition model is trained by:
and acquiring a sample training set, wherein the sample training set comprises training images and training keywords corresponding to the training images.
And training a preset training model according to the sample training set to obtain an image recognition model.
Optionally, training a preset training model according to the sample training set, and obtaining the image recognition model includes:
and performing model training by taking the training images and the training keywords as training samples to obtain an image recognition model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, the target poetry is determined by taking the target keywords corresponding to the target objects in the image to be processed determined by the image recognition model as the theme of the image to be processed, the determined target poetry can be accurately matched with the theme of the image to be processed, and the target poetry can be quickly obtained by directly selecting the target poetry from the preset poetry set by using the target keywords.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the poetry generation method provided by the present disclosure.
In summary, the target poetry is determined by taking the target keywords corresponding to the target objects in the image to be processed determined by the image recognition model as the theme of the image to be processed, the determined target poetry can be accurately matched with the theme of the image to be processed, and the target poetry can be quickly obtained by directly selecting the target poetry from the preset poetry set by using the target keywords.
FIG. 11 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the poetry generation method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 806 provide power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the poetry generation method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the poetry generation method described above, is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, there is also provided a computer program product comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the poetry generation method described above when executed by the programmable apparatus.
In summary, the target poetry is determined by taking the target keywords corresponding to the target objects in the image to be processed determined by the image recognition model as the theme of the image to be processed, the determined target poetry can be accurately matched with the theme of the image to be processed, and the target poetry can be quickly obtained by directly selecting the target poetry from the preset poetry set by using the target keywords.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A poetry generating method, characterized in that the method comprises:
acquiring an image to be processed input through a terminal;
obtaining a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model according to the image to be processed;
and determining a target poetry corresponding to the target keyword from a preset poetry set.
2. The method according to claim 1, wherein obtaining, according to the image to be processed, a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model comprises:
taking the image to be processed as the input of the image recognition model to obtain a keyword corresponding to each target object in the target objects and a confidence coefficient corresponding to each keyword;
and taking a preset number of keywords with the maximum confidence level in the keywords as the target keywords.
3. The method of claim 1, wherein the determining target poetry corresponding to the target keyword from a preset poetry set comprises:
determining a plurality of candidate poetry corresponding to the target keyword from the preset poetry set;
and determining the target poetry from the plurality of candidate poetry.
4. The method of claim 3, wherein the preset poetry set comprises a plurality of poetry, and wherein determining a plurality of candidate poetry corresponding to the target keyword from the preset poetry set comprises:
and taking poems including the target keywords in the poems as the candidate poems.
5. The method as claimed in claim 3, wherein before determining the target poetry corresponding to the target keyword from the preset poetry set, the method further comprises:
acquiring poetry demand information, wherein the poetry demand information comprises at least one of poetry type, poetry format and poetry quantity;
the determining the target poetry from the plurality of candidate poetry comprises:
and taking the candidate poetry matched with the poetry demand information in the plurality of candidate poetry as the target poetry.
6. The method of claim 3, wherein said determining said target poetry from said plurality of candidate poetry comprises:
and taking the candidate poetry selected by the user as the target poetry from the plurality of candidate poetry.
7. The method of claim 1, further comprising:
and displaying the image to be processed and the target poetry.
8. The method of any of claims 1-7, wherein the image recognition model is trained by:
acquiring a sample training set, wherein the sample training set comprises training images and training keywords corresponding to the training images;
and training a preset training model according to the sample training set to obtain the image recognition model.
9. The method of claim 8, wherein the training a preset training model according to the sample training set to obtain the image recognition model comprises:
and performing model training by taking the training images and the training keywords as training samples to obtain the image recognition model.
10. A poetry generating apparatus, characterized in that the apparatus comprises:
an acquisition module configured to acquire an image to be processed input through a terminal;
the recognition module is configured to obtain a target keyword corresponding to a target object in the image to be processed through a pre-trained image recognition model according to the image to be processed;
the determining module is configured to determine target poetry corresponding to the target keyword from a preset poetry set.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1-9.
12. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 9.
CN202010591279.0A 2020-06-24 2020-06-24 Poetry generation method and device, electronic equipment and storage medium Pending CN111797262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010591279.0A CN111797262A (en) 2020-06-24 2020-06-24 Poetry generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010591279.0A CN111797262A (en) 2020-06-24 2020-06-24 Poetry generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111797262A (en) 2020-10-20

Family

ID=72803688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010591279.0A Pending CN111797262A (en) 2020-06-24 2020-06-24 Poetry generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111797262A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434145A (en) * 2020-11-25 2021-03-02 天津大学 Picture-viewing poetry method based on image recognition and natural language processing
CN113794915A (en) * 2021-09-13 2021-12-14 海信电子科技(武汉)有限公司 Server, display equipment, poetry and song endowing generation method and media asset playing method
CN115115822A (en) * 2022-06-30 2022-09-27 小米汽车科技有限公司 Vehicle-end image processing method and device, vehicle, storage medium and chip

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226547A (en) * 2013-04-28 2013-07-31 百度在线网络技术(北京)有限公司 Method and device for producing verse for picture
CN106446782A (en) * 2016-08-29 2017-02-22 北京小米移动软件有限公司 Image identification method and device
CN107122492A (en) * 2017-05-19 2017-09-01 北京金山安全软件有限公司 Lyric generation method and device based on picture content
CN107590491A (en) * 2016-07-07 2018-01-16 阿里巴巴集团控股有限公司 A kind of image processing method and device
WO2018086470A1 (en) * 2016-11-10 2018-05-17 腾讯科技(深圳)有限公司 Keyword extraction method and device, and server
JP2018151854A (en) * 2017-03-13 2018-09-27 富士ゼロックス株式会社 Document processing device and program
CN108874779A (en) * 2018-06-21 2018-11-23 东北大学 The control method that system is write the poem according to figure established based on K8s cluster
CN109643332A (en) * 2016-12-26 2019-04-16 华为技术有限公司 A kind of sentence recommended method and device
CN109766013A (en) * 2018-12-28 2019-05-17 北京金山安全软件有限公司 Poetry sentence input recommendation method and device and electronic equipment
CN109784165A (en) * 2018-12-12 2019-05-21 平安科技(深圳)有限公司 Generation method, device, terminal and the storage medium of poem content
CN110991175A (en) * 2019-12-10 2020-04-10 爱驰汽车有限公司 Text generation method, system, device and storage medium under multiple modes
CN111275110A (en) * 2020-01-20 2020-06-12 北京百度网讯科技有限公司 Image description method and device, electronic equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226547A (en) * 2013-04-28 2013-07-31 百度在线网络技术(北京)有限公司 Method and device for producing verse for picture
CN107590491A (en) * 2016-07-07 2018-01-16 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN106446782A (en) * 2016-08-29 2017-02-22 北京小米移动软件有限公司 Image identification method and device
WO2018086470A1 (en) * 2016-11-10 2018-05-17 腾讯科技(深圳)有限公司 Keyword extraction method and device, and server
CN109643332A (en) * 2016-12-26 2019-04-16 华为技术有限公司 A kind of sentence recommended method and device
JP2018151854A (en) * 2017-03-13 2018-09-27 富士ゼロックス株式会社 Document processing device and program
CN107122492A (en) * 2017-05-19 2017-09-01 北京金山安全软件有限公司 Lyric generation method and device based on picture content
CN108874779A (en) * 2018-06-21 2018-11-23 东北大学 The control method that system is write the poem according to figure established based on K8s cluster
CN109784165A (en) * 2018-12-12 2019-05-21 平安科技(深圳)有限公司 Generation method, device, terminal and the storage medium of poem content
CN109766013A (en) * 2018-12-28 2019-05-17 北京金山安全软件有限公司 Poetry sentence input recommendation method and device and electronic equipment
CN110991175A (en) * 2019-12-10 2020-04-10 爱驰汽车有限公司 Text generation method, system, device and storage medium under multiple modes
CN111275110A (en) * 2020-01-20 2020-06-12 北京百度网讯科技有限公司 Image description method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何立健; 林穗; 翁海瑞: "An LSTM-based model for generating poetry from images" (基于LSTM的图像生成诗歌模型), Information Technology and Network Security, vol. 38, no. 4, 10 April 2019 (2019-04-10), pages 76-83 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434145A (en) * 2020-11-25 2021-03-02 天津大学 Picture-viewing poetry method based on image recognition and natural language processing
CN113794915A (en) * 2021-09-13 2021-12-14 海信电子科技(武汉)有限公司 Server, display equipment, poetry and song endowing generation method and media asset playing method
CN115115822A (en) * 2022-06-30 2022-09-27 小米汽车科技有限公司 Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115115822B (en) * 2022-06-30 2023-10-31 小米汽车科技有限公司 Vehicle-end image processing method and device, vehicle, storage medium and chip


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination