CN116579317A - Method and system for automatically generating publications based on AI content - Google Patents


Info

Publication number
CN116579317A
CN116579317A
Authority
CN
China
Prior art keywords
content
keywords
publication
generating
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310853986.6A
Other languages
Chinese (zh)
Other versions
CN116579317B (en
Inventor
韩阳
付鹏
罗庚
齐书稳
王殿武
周彦彬
张文超
潘恒
张珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Citic United Cloud Technology Co ltd
Original Assignee
Citic United Cloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Citic United Cloud Technology Co ltd filed Critical Citic United Cloud Technology Co ltd
Priority to CN202310853986.6A priority Critical patent/CN116579317B/en
Publication of CN116579317A publication Critical patent/CN116579317A/en
Application granted granted Critical
Publication of CN116579317B publication Critical patent/CN116579317B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/186Templates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a method and a system for automatically generating publications based on AI content, belonging to the field of artificial intelligence. The method comprises the following steps: inputting data to be processed into a preprocessing content model to generate publication content; matching publication plates (layout sections) according to the content, extracting keywords preset for the plates, and generating auxiliary content according to the keywords; generating a publication pre-sample from the publication content and the auxiliary content; and imaging the pre-sample and modifying it based on input pattern keywords to generate the publication. Because AI typesetting is used throughout, including AI-driven typesetting beautification, the ornamental value of the intelligently typeset finished product is effectively improved.

Description

Method and system for automatically generating publications based on AI content
Technical Field
The application relates to the field of artificial intelligence, and in particular to a method for automatically generating publications based on AI content. It also relates to a system for automatically generating publications based on AI content.
Background
Many typesetting tasks at larger publishing houses in China are already outsourced to typesetting companies rather than carried out by in-house staff.
At present, typesetting is mainly either manual or intelligent. With the development of science and technology, rapid intelligent typesetting has gradually appeared in the field of scientific typesetting; it produces relatively standard formats and handles the content carefully, but pays little attention to beautification and decorative typesetting.
Therefore, the finished products of intelligent typesetting in the prior art lack ornamental value.
Disclosure of Invention
The application aims to solve the problem that intelligent typesetting products in the prior art lack ornamental value, and provides a method for automatically generating publications based on AI content, together with a corresponding system.
The application also provides a method for automatically generating the publication based on the AI content, which comprises the following steps:
inputting data to be processed into a preprocessing content model to generate publication content;
matching publication plates according to the contents, extracting keywords preset based on the plates, and generating auxiliary contents according to the keywords;
generating a publication pre-sample from the publication content and the auxiliary content;
and imaging the pre-sample, and modifying based on the input pattern keywords to generate a publication.
Optionally, the preprocessing model includes:
a voice conversion module, or an image recognition module.
Optionally, the obtaining of auxiliary content includes: ranking based on scores, where S is the evaluation score; A is the total word-count score, B is the total relationship score, and C is a base score; H is the count of keyword i, G is the count of the relationships of keyword i, H_total is the total number of keywords, and G_total is the total number of relationships;
and determining content keywords based on the ranking.
Optionally, the publication pre-sample includes image data and/or text data.
Optionally, the generating auxiliary content according to the keyword includes:
an image is generated based on the one or more keywords, and a description is generated based on the one or more keywords.
The application also provides a system for automatically generating publications based on AI content, comprising:
the processing module is used for inputting the data to be processed into the preprocessing content model to generate publication content;
the generation module is used for matching publication plates according to the content, extracting keywords preset based on the plates and generating auxiliary content according to the keywords;
the pre-sampling module is used for generating a publication pre-sample according to the publication content and the auxiliary content;
and the publishing module is used for imaging the pre-sample and generating a publication after modifying the pre-sample based on the input pattern keywords.
Optionally, the preprocessing model includes:
a voice conversion module, or an image recognition module.
Optionally, the generating module obtains the auxiliary content by ranking based on scores, where S is the evaluation score; A is the total word-count score, B is the total relationship score, and C is a base score; H is the count of keyword i, G is the count of the relationships of keyword i, H_total is the total number of keywords, and G_total is the total number of relationships;
and determining content keywords based on the ranking.
Optionally, the publication pre-sample includes image data and/or text data.
Optionally, the pre-sampling module generates auxiliary content according to the keyword, including:
an image is generated based on the one or more keywords, and a description is generated based on the one or more keywords.
The advantages and beneficial effects of the application are:
1. The publication is generated automatically.
2. The publication is generated after modification based on input pattern keywords.
3. Auxiliary content is generated from the keywords.
The application provides a method for automatically generating publications based on AI content, which comprises the following steps: inputting data to be processed into a preprocessing content model to generate publication content; matching publication plates according to the content, extracting keywords preset for the plates, and generating auxiliary content according to the keywords; generating a publication pre-sample from the publication content and the auxiliary content; and imaging the pre-sample and modifying it based on input pattern keywords to generate the publication. Because AI typesetting is used throughout, including AI-driven typesetting beautification, the ornamental value of the intelligently typeset finished product is effectively improved.
Drawings
FIG. 1 is a schematic flow chart of the automatic generation of publications based on AI content in the application.
FIG. 2 is a schematic diagram of the sorting and screening of generic templates in the present application.
FIG. 3 is a schematic diagram of the logic sequence performed in the present application.
Fig. 4 is a schematic diagram of a system for automatically generating publications based on AI content in the present application.
Detailed Description
The present application is further described in conjunction with the accompanying drawings and specific embodiments so that those skilled in the art may better understand the present application and practice it.
The application provides a method for automatically generating publications based on AI content, which comprises the following steps: inputting data to be processed into a preprocessing content model to generate publication content; matching publication plates according to the content, extracting keywords preset for the plates, and generating auxiliary content according to the keywords; generating a publication pre-sample from the publication content and the auxiliary content; and imaging the pre-sample and modifying it based on input pattern keywords to generate the publication. Because AI typesetting is used throughout, including AI-driven typesetting beautification, the ornamental value of the intelligently typeset finished product is effectively improved.
FIG. 1 is a schematic flow chart of the automatic generation of publications based on AI content in the application.
Referring to S101 shown in fig. 1 and 3, inputting data to be processed into a preprocessing content model to generate publication content;
the data to be processed comprises: and (5) voice recognition and image-text recognition.
The essence of speech recognition is a pattern recognition based on speech feature parameters, i.e. through learning, the system can classify the input speech according to a certain pattern, and then find out the best matching result according to the decision criteria.
The input speech is first pre-processed, including framing, windowing, pre-emphasis, etc.
Features are then extracted by selecting feature parameters, including: pitch period, formants, short-time average energy or amplitude, linear prediction coefficients (LPC), perceptual linear prediction coefficients (PLP), short-time average zero-crossing rate, linear prediction cepstral coefficients (LPCC), autocorrelation functions, Mel-frequency cepstral coefficients (MFCC), wavelet transform coefficients, empirical mode decomposition coefficients (EMD), gammatone filter cepstral coefficients (GFCC), and the like.
Finally, features are extracted based on the selected feature parameters, a template is generated for the test voice according to the training process, and recognition is performed according to a distortion decision criterion. Common distortion decision criteria include Euclidean distance, covariance matrix and Bayesian distance.
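As an illustrative sketch of the pre-processing steps named above (pre-emphasis, framing, windowing) — not the patent's implementation, and with frame length, hop size and pre-emphasis coefficient chosen as common textbook defaults — the pipeline can be written with NumPy as:

```python
import numpy as np

def preprocess_speech(signal, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasis, framing and Hamming windowing of a 1-D speech signal."""
    # Pre-emphasis: boost high frequencies, y[n] = x[n] - alpha * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing: split into overlapping frames of frame_len samples
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Windowing: apply a Hamming window to each frame
    return frames * np.hamming(frame_len)

windowed = preprocess_speech(np.random.randn(16000))  # 1 s of audio at 16 kHz
print(windowed.shape)  # (98, 400)
```

Feature parameters such as MFCCs would then be computed per windowed frame.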
The image-text recognition refers to character recognition through OCR software.
Through the above processing, the publication content is obtained. Content association is also required within the publication content: voice recognition and image-text recognition can only extract the text content of the input data, so the text content must additionally be associated with the picture content.
Finally, combining the text content, the image content and the association of the text and the image into the publication content.
Referring to fig. 1, S102 matches a publication plate according to the content, extracts a keyword preset based on the plate, and generates auxiliary content according to the keyword.
The publication plate includes typesetting, framing (binding decoration) and similar content, meaning that the typesetting and decoration are generated according to the publication content.
Specifically, the typesetting and framing are preset and stored universal templates, and related keywords are also arranged corresponding to the universal templates.
When typesetting and framing are carried out, content statistics must be collected on the publication content; the universal templates are sorted and screened based on the statistical results, and the screened results are further processed to obtain the auxiliary content.
FIG. 2 is a schematic diagram of the sorting and screening of generic templates in the present application.
Referring to fig. 2, S201 first performs content keyword extraction on the content.
These content keywords may be extracted according to pre-defined extraction rules, such as extracting only nouns, or extracting only nouns that follow verbs.
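The extraction rules above ("only nouns", "only nouns after verbs") can be sketched over pre-tagged tokens; the tag names (NOUN, VERB) and the helper below are illustrative assumptions, not part of the patent:

```python
def extract_keywords(tagged_tokens, rule="nouns"):
    """tagged_tokens: list of (word, pos) pairs, pos in {"NOUN", "VERB", ...}."""
    if rule == "nouns":
        return [w for w, pos in tagged_tokens if pos == "NOUN"]
    if rule == "nouns_after_verbs":
        # Keep a noun only when the token immediately before it is a verb.
        return [w for (w, pos), (_, prev) in
                zip(tagged_tokens[1:], tagged_tokens)
                if pos == "NOUN" and prev == "VERB"]
    raise ValueError(f"unknown rule: {rule}")

tokens = [("the", "DET"), ("system", "NOUN"), ("generates", "VERB"),
          ("publications", "NOUN"), ("quickly", "ADV")]
print(extract_keywords(tokens))                       # ['system', 'publications']
print(extract_keywords(tokens, "nouns_after_verbs"))  # ['publications']
```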
After the content keywords are extracted, the keywords are sorted, for example by keyword count, or by keyword position. Preferably, position-based ranking of keywords can be carried out using a scoring method:
setting a position weight, for example, evaluating paragraphs to obtain paragraph weights. The paragraph weights are as follows:
wherein the saidIs the score of the evaluation, a is the word count total score, B is the relationship total score, and C is the score. The number of the keywords is H, the number of the keywords is i, the number of the relationships is G, and the number of the relationships is +.>Is the total keyword number, said +.>Is the total number of relationships.
The values A, B and C are preset values greater than 1, and may also be greater than 10. Ranking is then performed based on the scores, including:
when sorting is performed, firstly, score comparison is performed, namely, the current sorting position is:
wherein j is a new sort sequence number.
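The patent's scoring expression appears only as an image in the source, so the combination below is a hypothetical stand-in: it weights the per-keyword count and relationship count by A and B and adds a base score C, then sorts by the result. Only the variable names (S, A, B, C, H, G, H_total, G_total) come from the text.

```python
def score(h_i, g_i, h_total, g_total, A=10, B=10, C=2):
    """Hypothetical evaluation score S for one keyword; the patent's
    actual formula is not reproduced in the translated text."""
    return A * h_i / h_total + B * g_i / g_total + C

# keyword -> (count in the content, number of relationships to other keywords)
stats = {"publication": (5, 3), "template": (3, 4), "image": (2, 1)}
h_total = sum(h for h, _ in stats.values())   # 10 keywords in total
g_total = sum(g for _, g in stats.values())   # 8 relationships in total
ranked = sorted(stats, key=lambda k: score(*stats[k], h_total, g_total),
                reverse=True)
print(ranked)  # highest-scoring keyword first
```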
Referring to fig. 2, S202 determines content keywords based on the ranking.
Determining the content keywords means selecting the first few entries of the reordered keyword sequence as the final content keywords.
Referring to fig. 2, S203 performs ranking and screening of universal templates based on the content keywords.
Specifically, the keywords are matched against the keywords associated with the universal templates, the matched universal templates are ranked by matching degree, and the universal template with the highest matching degree is selected.
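The matching step can be sketched as keyword-set overlap; treating the overlap count as the "matching degree" is an assumption, since the patent does not specify the measure:

```python
def best_template(content_keywords, templates):
    """templates: dict of template name -> set of keywords preset for it.
    Returns template names ranked by matching degree (overlap), best first."""
    kw = set(content_keywords)
    return sorted(templates,
                  key=lambda name: len(kw & templates[name]),
                  reverse=True)

templates = {
    "news":    {"time", "address", "event"},
    "journal": {"abstract", "keywords", "references"},
}
print(best_template(["time", "event", "speaker"], templates)[0])  # news
```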
Referring to fig. 2, S204 generates auxiliary content based on the universal template.
Comprising the following steps: an image is generated based on the one or more keywords, and a description is generated based on the one or more keywords.
Specifically, content is added in the template format based on the universal template, such as nouns, information like time and address, and various images. The choice may be made by a worker according to the actual situation and is not described in detail here.
Please refer to S103 in fig. 1, which generates a publication pre-sample according to the publication content and the auxiliary content.
Specifically, a publication pre-sample is generated based on the universal template, the publication content and the auxiliary content; this includes adding all the publication information, adding images according to the association between the publication content and the images, adapting the images to the universal template, modifying formats, and the like.
Specifically, the universal template provides different plates depending on the publication content, and the different plates are filled with the publication content and the auxiliary content to obtain the final publication.
The pre-sample is composed of one or more text files in a text format, ordered and stored according to the preset plate sequence of the universal template.
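The plate-ordered storage of the pre-sample text files can be sketched as follows; the plate names and the skip-missing-plates behavior are illustrative assumptions:

```python
def assemble_presample(plate_order, plate_files):
    """plate_order: preset plate sequence of the universal template;
    plate_files: dict of plate name -> text content.
    Returns (plate, text) pairs in plate order; absent plates are skipped."""
    return [(plate, plate_files[plate]) for plate in plate_order
            if plate in plate_files]

order = ["cover", "toc", "body", "back"]
files = {"body": "main text...", "cover": "title page..."}
presample = assemble_presample(order, files)
print([plate for plate, _ in presample])  # ['cover', 'body']
```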
Finally, the pre-sample is output, manually adjusted and further normalized to produce the pre-sample to be processed.
Please refer to S104 in fig. 1, which is to image the pre-sample and modify the pre-sample based on the inputted pattern keyword to generate a publication.
Imaging the pre-sample means converting the text file into an image file; the final decorated finished product is then output based on the image file.
Specifically, after the image file is output its content cannot be modified, so the finished product must first be classified. Publications generated from universal templates can be divided into two broad categories: modifiable portions and non-modifiable portions. The modifiable portion is weakly associated with the content, while the non-modifiable portion is strongly associated. The selection is determined according to the amount of publication content entered into the universal template.
Based on the modifiable portion, image recognition is first performed to identify non-modifiable content, such as text, within it. The non-modifiable and modifiable content are identified and extracted, and the non-modifiable content is stored.
The modifiable content is extracted, and the relevance between the modifiable content and the non-modifiable content is calculated, where l is the correlation used for comparison, (x, y) are the center-point coordinates of the non-modifiable content, and (x_k, y_k) are the coordinates of the points of the modifiable content.
A threshold is set; when l is smaller than the threshold, the non-modifiable content is used as a pattern keyword to modify the modifiable content.
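The threshold test can be sketched as below. Using the Euclidean distance from the non-modifiable center (x, y) to the nearest modifiable point (x_k, y_k) as l is an assumption; the patent's exact expression is not reproduced in the translated text.

```python
import math

def relevance(center, points):
    """Hypothetical l: smallest distance from the non-modifiable center
    (x, y) to any point (x_k, y_k) of the modifiable content."""
    x, y = center
    return min(math.hypot(x - xk, y - yk) for xk, yk in points)

center = (50.0, 50.0)             # center of a non-modifiable text block
modifiable_points = [(52.0, 50.0), (120.0, 80.0)]
l = relevance(center, modifiable_points)
THRESHOLD = 10.0                  # illustrative threshold value
use_as_pattern_keyword = l < THRESHOLD
print(l, use_as_pattern_keyword)  # 2.0 True
```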
Specifically, the modifiable content can be cut out, fed into an image generation network as an initial image, modified with the input content keywords and output; this includes generating an image based on one or more keywords and generating a description from the one or more keywords.
Finally, the output image is placed back in its original position, and the non-modifiable content is restored to its original position.
Finally, the publication is obtained.
The application also provides a system for automatically generating publications based on AI content, comprising: a processing module 301, a generating module 302, a pre-sampling module 303, and a publishing module 304.
Referring to the processing module 301 shown in fig. 4, it is used for inputting data to be processed into the preprocessing content model to generate publication content.
The data to be processed includes voice data and image-text data, which are handled by voice recognition and image-text recognition respectively.
The essence of speech recognition is a pattern recognition based on speech feature parameters, i.e. through learning, the system can classify the input speech according to a certain pattern, and then find out the best matching result according to the decision criteria.
The input speech is first pre-processed, including framing, windowing, pre-emphasis, etc.
Features are then extracted by selecting feature parameters, including: pitch period, formants, short-time average energy or amplitude, linear prediction coefficients (LPC), perceptual linear prediction coefficients (PLP), short-time average zero-crossing rate, linear prediction cepstral coefficients (LPCC), autocorrelation functions, Mel-frequency cepstral coefficients (MFCC), wavelet transform coefficients, empirical mode decomposition coefficients (EMD), gammatone filter cepstral coefficients (GFCC), and the like.
Finally, features are extracted based on the selected feature parameters, a template is generated for the test voice according to the training process, and recognition is performed according to a distortion decision criterion. Common distortion decision criteria include Euclidean distance, covariance matrix and Bayesian distance.
The image-text recognition refers to character recognition through OCR software.
Through the above processing, the publication content is obtained. Content association is also required within the publication content: voice recognition and image-text recognition can only extract the text content of the input data, so the text content must additionally be associated with the picture content.
Finally, combining the text content, the image content and the association of the text and the image into the publication content.
Referring to the generating module 302 shown in fig. 4, it is used for matching a publication plate according to the content, extracting keywords preset for the plate, and generating auxiliary content according to the keywords.
The publication plate includes typesetting, framing (binding decoration) and similar content, meaning that the typesetting and decoration are generated according to the publication content.
Specifically, the typesetting and framing are preset and stored universal templates, and related keywords are also arranged corresponding to the universal templates.
When typesetting and framing are carried out, content statistics must be collected on the publication content; the universal templates are sorted and screened based on the statistical results, and the screened results are further processed to obtain the auxiliary content.
FIG. 2 is a schematic diagram of the sorting and screening of generic templates in the present application.
Referring to fig. 2, S201 first performs content keyword extraction on the content.
These content keywords may be extracted according to pre-defined extraction rules, such as extracting only nouns, or extracting only nouns that follow verbs.
After the content keywords are extracted, the keywords are sorted, for example by keyword count, or by keyword position. Preferably, position-based ranking of keywords can be carried out using a scoring method:
setting a position weight, for example, evaluating paragraphs to obtain paragraph weights. The paragraph weights are as follows:
wherein the saidIs the score of the evaluation, a is the word count total score, B is the relationship total score, and C is the score. The number of the keywords is H, the number of the keywords is i, the number of the relationships is G, and the number of the relationships is +.>Is the total keyword number, said +.>Is the total number of relationships.
The values A, B and C are preset values greater than 1, and may also be greater than 10. Ranking is then performed based on the scores, including:
when sorting is performed, firstly, score comparison is performed, namely, the current sorting position is:
wherein j is a new sort sequence number.
Referring to fig. 2, S202 determines content keywords based on the ranking.
Determining the content keywords means selecting the first few entries of the reordered keyword sequence as the final content keywords.
Referring to fig. 2, S203 performs ranking and screening of universal templates based on the content keywords.
Specifically, the keywords are matched against the keywords associated with the universal templates, the matched universal templates are ranked by matching degree, and the universal template with the highest matching degree is selected.
Referring to fig. 2, S204 generates auxiliary content based on the universal template.
Comprising the following steps: an image is generated based on the one or more keywords, and a description is generated based on the one or more keywords.
Specifically, content is added in the template format based on the universal template, such as nouns, information like time and address, and various images. The choice may be made by a worker according to the actual situation and is not described in detail here.
Please refer to the pre-sampling module 303 shown in fig. 4, which is configured to generate a publication pre-sample according to the publication content and the auxiliary content.
Specifically, a publication pre-sample is generated based on the universal template, the publication content and the auxiliary content; this includes adding all the publication information, adding images according to the association between the publication content and the images, adapting the images to the universal template, modifying formats, and the like.
Specifically, the universal template provides different plates depending on the publication content, and the different plates are filled with the publication content and the auxiliary content to obtain the final publication.
The pre-sample is composed of one or more text files in a text format, ordered and stored according to the preset plate sequence of the universal template.
Finally, the pre-sample is output, manually adjusted and further normalized to produce the pre-sample to be processed.
Referring to the publishing module 304 shown in fig. 4, it images the pre-sample and generates a publication after modification based on the input pattern keywords.
Imaging the pre-sample means converting the text file into an image file; the final decorated finished product is then output based on the image file.
Specifically, after the image file is output its content cannot be modified, so the finished product must first be classified. Publications generated from universal templates can be divided into two broad categories: modifiable portions and non-modifiable portions. The modifiable portion is weakly associated with the content, while the non-modifiable portion is strongly associated. The selection is determined according to the amount of publication content entered into the universal template.
Based on the modifiable portion, image recognition is first performed to identify non-modifiable content, such as text, within it. The non-modifiable and modifiable content are identified and extracted, and the non-modifiable content is stored.
The modifiable content is extracted, and the relevance between the modifiable content and the non-modifiable content is calculated, where l is the correlation used for comparison, (x, y) are the center-point coordinates of the non-modifiable content, and (x_k, y_k) are the coordinates of the points of the modifiable content.
A threshold is set; when l is smaller than the threshold, the non-modifiable content is used as a pattern keyword to modify the modifiable content.
Specifically, the modifiable content can be cut out, fed into an image generation network as an initial image, modified with the input content keywords and output; this includes generating an image based on one or more keywords and generating a description from the one or more keywords.
Finally, the output image is placed back in its original position, and the non-modifiable content is restored to its original position.
Finally, the publication is obtained.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for automatically generating a publication based on AI content, comprising:
inputting data to be processed into a preprocessing content model to generate publication content;
matching publication plates according to the contents, extracting keywords preset based on the plates, and generating auxiliary contents according to the keywords;
generating a publication pre-sample from the publication content and the auxiliary content;
and imaging the pre-sample, and modifying based on the input pattern keywords to generate a publication.
2. The method for automatically generating publications based on AI content of claim 1, wherein said pre-processing model comprises:
a voice conversion module, or an image recognition module.
3. The method for automatically generating publications based on AI content of claim 1, wherein said obtaining auxiliary content comprises: ranking based on scores, where S is the evaluation score; A is the total word-count score, B is the total relationship score, and C is a base score; H is the count of keyword i, G is the count of the relationships of keyword i, H_total is the total number of keywords, and G_total is the total number of relationships;
and determining content keywords based on the ranking.
4. The method for automatically generating a publication based on AI content of claim 1, wherein said publication pre-sample includes image data and/or text data.
5. The method for automatically generating publications based on AI content of claim 1, wherein said generating auxiliary content from said keywords comprises:
an image is generated based on the one or more keywords, and a description is generated based on the one or more keywords.
6. A system for automatically generating a publication based on AI content, configured to perform the method of any one of claims 1-5, comprising:
a processing module, configured to input data to be processed into a preprocessing content model to generate publication content;
a generation module, configured to match a publication section according to the content, extract keywords preset for that section, and generate auxiliary content according to the keywords;
a pre-sampling module, configured to generate a publication pre-sample from the publication content and the auxiliary content; and
a publishing module, configured to render the pre-sample as an image and generate the publication after modifying it based on input style keywords.
7. The system of claim 6, wherein the preprocessing content model comprises:
a voice conversion module or an image recognition module.
8. The system of claim 6, wherein the generation module obtaining the auxiliary content comprises: ranking candidates by an evaluation score (the scoring expression appears only as an image in the source and is not reproduced here), where S is the evaluation score, A is the total word-count score, B is the total relation score, C is a base score, H_i is a candidate's keyword count, G_i is its relation count, H is the total number of keywords, and G is the total number of relations;
and determining content keywords based on the ranking.
9. The system of claim 6, wherein the publication pre-sample comprises image data and/or text data.
10. The system of claim 6, wherein the pre-sampling module generating auxiliary content from the keywords comprises:
generating an image based on one or more of the keywords, and generating a description based on one or more of the keywords.
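The score-based keyword ranking of claims 3 and 8 can be illustrated with a short sketch. Because the patent's scoring formula survives only as an image, the exact expression below is an assumption: each candidate's keyword count H_i and relation count G_i are normalized by the totals H and G and combined with the weights A and B plus a base score C. The function name `rank_candidates` and the dictionary layout are illustrative, not from the source.

```python
def rank_candidates(candidates, A=1.0, B=1.0, C=0.0):
    """Rank candidate keywords by an assumed evaluation score.

    candidates: list of dicts with 'name', 'keywords' (H_i), 'relations' (G_i).
    A: total word-count score weight; B: total relation score weight; C: base score.
    """
    # Totals over all candidates (guard against division by zero).
    H = sum(c["keywords"] for c in candidates) or 1
    G = sum(c["relations"] for c in candidates) or 1
    # Assumed score: S_i = A * H_i / H + B * G_i / G + C
    scored = [
        (A * c["keywords"] / H + B * c["relations"] / G + C, c["name"])
        for c in candidates
    ]
    # Highest evaluation score first; the top-ranked entries become content keywords.
    scored.sort(reverse=True)
    return [name for _, name in scored]
```

A candidate that matches more preset section keywords, or participates in more relations, rises in the ranking; the weights A and B let the section configuration trade those two signals off against each other.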
CN202310853986.6A 2023-07-13 2023-07-13 Method and system for automatically generating publications based on AI content Active CN116579317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310853986.6A CN116579317B (en) 2023-07-13 2023-07-13 Method and system for automatically generating publications based on AI content


Publications (2)

Publication Number Publication Date
CN116579317A true CN116579317A (en) 2023-08-11
CN116579317B CN116579317B (en) 2023-10-13

Family

ID=87538200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310853986.6A Active CN116579317B (en) 2023-07-13 2023-07-13 Method and system for automatically generating publications based on AI content

Country Status (1)

Country Link
CN (1) CN116579317B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123269A (en) * 2014-07-16 2014-10-29 华中科技大学 Semi-automatic publication generation method and system based on template
CN105260359A (en) * 2015-10-16 2016-01-20 晶赞广告(上海)有限公司 Semantic keyword extraction method and apparatus
CN111881307A (en) * 2020-07-28 2020-11-03 平安科技(深圳)有限公司 Demonstration manuscript generation method and device, computer equipment and storage medium
US20210081719A1 (en) * 2019-09-13 2021-03-18 Oracle International Corporation Scalable architecture for automatic generation of content distribution images


Also Published As

Publication number Publication date
CN116579317B (en) 2023-10-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Han Yang, Fu Peng, Luo Geng, Qi Shuwen, Wang Dianwu, Zhou Yanbin, Zhang Wenchao, Pan Heng, Zhang Ke

Inventor before: Han Yang, Fu Peng, Luo Geng, Qi Shuwen, Wang Dianwu, Zhou Yanbin, Zhang Wenchao, Pan Heng, Zhang Ke