CN115963954A - Information publishing method, device, equipment and medium - Google Patents


Info

Publication number: CN115963954A
Application number: CN202310239106.6A
Authority: CN (China)
Prior art keywords: information, image, segmentation result, document, editing
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 刘晨曦, 胡芳, 连玉超, 赵建华, 张立春
Current assignee: Beijing Zhongke Intelligent Media Technology Co ltd (the listed assignees may be inaccurate)
Original assignee: Beijing Zhongke Intelligent Media Technology Co ltd
Application filed by Beijing Zhongke Intelligent Media Technology Co ltd, with priority to CN202310239106.6A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides an information publishing method, apparatus, device, and medium. The method includes: in response to an information publishing instruction issued by a user, displaying an information editing interface on a graphical user interface; in response to an input operation performed by the user in the content editing control, displaying the content information generated by the input operation in the control, to indicate to the user that the input is complete; in response to a publishing instruction issued by the user, performing general sensitivity detection on the content information and prompting the user to revise the sensitive information found by the detection, so as to generate a publishable document; performing a first intelligent edit on the publishable document according to the content editing model corresponding to its publishing region, to generate a document to be published for that region, where the first intelligent edit includes one or more of text content conversion, image processing, video processing, and link modification; and publishing the document to be published.

Description

Information publishing method, device, equipment and medium
Technical Field
The present application relates to the field of information processing, and in particular to an information publishing method, apparatus, device, and medium.
Background
Information publishing refers to disseminating information through a publishing technology. In this field, publishing through a network platform is the basic mode. From early radio and television broadcasts to today's internet, network platforms have become the main channel through which people both obtain and release information.
Early published information was primarily text. With social development and technical progress, published content now also includes audio, video, links, and so on, and certain fields support even richer publishing modes. Published information can be transmitted one-way, or it can support interaction with other users (as on microblogs).
Traditional information publishing technology usually provides only a single publishing function: information is published directly after the user inputs it.
Disclosure of Invention
In view of the above, an object of the present application is to provide an information publishing method, apparatus, device, and medium that address the problems in the prior art that published document content is not controllable and that inappropriate content is discovered only after publication.
In a first aspect, an embodiment of the present application provides an information publishing method, including:
in response to an information publishing instruction issued by a user, displaying an information editing interface on a graphical user interface; the information editing interface includes a title editing control and a content editing control; the content editing control receives content information input by the user; the content information includes any one or more of: text information, image information, video information, and link information;
in response to an input operation performed by the user in the content editing control, displaying the content information generated by the input operation in the content editing control, to indicate to the user that the input operation is complete;
in response to a publishing instruction issued by the user, performing general sensitivity detection on the content information and prompting the user to revise the sensitive information found by the detection, so as to generate a publishable document;
performing a first intelligent edit on the publishable document according to the content editing model corresponding to its publishing region, to generate a document to be published for that region; the first intelligent edit includes one or more of text content conversion, image processing, video processing, and link modification; and
publishing the document to be published.
Optionally, performing the first intelligent edit on the publishable document according to the content editing model corresponding to its publishing region, to generate a document to be published for the publishing region, includes any one or more of the following modes:
Mode 1: identifying target text information in the publishable document using the content editing model corresponding to the publishing region, so as to determine a replaceable text object, and replacing the replaceable text object with the dialect of the publishing region using that model, to generate the document to be published;
Mode 2: identifying images in the publishable document using the image editing model corresponding to the publishing region, so as to determine a replaceable image object; selecting a target image object from the candidate objects associated with the replaceable image object based on the semantic information of the document summary, and replacing the replaceable image object with the target image object, to generate the document to be published;
Mode 3: extracting phoneme information from the video information, determining sensitive speech segments based on the phoneme information, and cutting the video based on those segments;
Mode 4: verifying the sensitivity of the network address corresponding to the link information, and removing any network address whose sensitivity exceeds a preset value.
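As a rough illustration, the first and fourth modes can be sketched in a few lines. The dialect table, the sensitivity score attached to each link, and the 0.8 cutoff are all invented stand-ins; the patent does not specify how dialect replacements or link sensitivities are actually obtained.

```python
# Toy sketch of mode 1 (dialect replacement) and mode 4 (link clearing).
# DIALECT_TABLE and SENSITIVITY_THRESHOLD are illustrative assumptions.

DIALECT_TABLE = {"hello": "howdy"}   # standard form -> publishing-region dialect
SENSITIVITY_THRESHOLD = 0.8          # preset value for link sensitivity

def edit_text(text):
    """Mode 1: replace each replaceable text object with its dialect form."""
    for standard, dialect in DIALECT_TABLE.items():
        text = text.replace(standard, dialect)
    return text

def filter_links(links):
    """Mode 4: keep only links whose sensitivity is within the preset value.

    `links` is a list of (url, sensitivity) pairs.
    """
    return [url for url, score in links if score <= SENSITIVITY_THRESHOLD]

print(edit_text("hello world"))                      # -> howdy world
print(filter_links([("http://a.example", 0.2),
                    ("http://b.example", 0.9)]))     # -> ['http://a.example']
```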
Optionally, identifying images in the publishable document using the image editing model corresponding to the publishing region, to determine the replaceable image object, includes:
performing grid segmentation on an original image in the publishable document three times, using a first grid size, a second grid size, and a third grid size, to generate a first, second, and third segmentation result; the sub-images in each segmentation result can be stitched back into the original image; the first grid size is more than twice the second grid size, and the second grid size is more than twice the third grid size;
identifying the sub-images in the three segmentation results against a preset sensitive image, to determine whether any segmentation result contains a target sub-image whose similarity to the preset sensitive image exceeds a preset value; the preset sensitive image is any one or more of: a sensitive image of the publishing region, and a sensitive image determined according to the types of people in the publishing region;
counting, for each segmentation result, the proportion of target sub-images it contains;
if the proportion of target sub-images in all three segmentation results is below the preset value, determining that the document to be published can be generated directly from the original image;
if the proportion of target sub-images in at least one of the three segmentation results exceeds the preset value, performing foreground extraction on the original image and running sensitivity identification on the extracted target foreground image using the image editing model corresponding to the publishing region; if the sensitivity of the target foreground image is too high, determining that it is a replaceable image object.
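The three-scale check above can be sketched as follows. The concrete grid sizes (chosen only to satisfy the stated "more than twice" constraints), the 10% ratio limit, and the `is_sensitive` predicate are assumptions; a real system would compare each tile against the preset sensitive images.

```python
# Sketch of the three-scale grid segmentation and ratio check.
# Grid sizes satisfy: first > 2 * second and second > 2 * third.

GRID_SIZES = (64, 28, 12)
RATIO_LIMIT = 0.1  # preset value for the target sub-image proportion

def split_grid(width, height, cell):
    """Split a (width x height) image into cell-sized tiles; the tiles
    together always cover exactly the original image."""
    return [(x, y, min(cell, width - x), min(cell, height - y))
            for y in range(0, height, cell)
            for x in range(0, width, cell)]

def target_ratio(tiles, is_sensitive):
    """Fraction of tiles judged similar to a preset sensitive image."""
    return sum(1 for t in tiles if is_sensitive(t)) / len(tiles)

def needs_foreground_check(width, height, is_sensitive):
    """True if any of the three segmentation results crosses the limit,
    which triggers foreground extraction on the original image."""
    return any(
        target_ratio(split_grid(width, height, s), is_sensitive) > RATIO_LIMIT
        for s in GRID_SIZES
    )
```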
Optionally, selecting a target image object from the candidate objects associated with the replaceable image object based on the semantic information of the document summary, and replacing the replaceable image object with the target image object to generate the document to be published, includes:
generating text summary information from the text information of the document;
identifying images in the document to generate image summary information in text form;
determining the semantic information of the document summary from the text summary information and the image summary information;
selecting, from all candidate objects in the database, a target image object relevant to the summary's semantic information, based on a pre-generated knowledge graph containing the replaceable image object;
replacing the replaceable image object with the target image object, to generate the document to be published.
Optionally, selecting, from all candidate objects in the database, an object relevant to the summary's semantic information as the target image object, based on the pre-generated knowledge graph containing the replaceable image object, includes:
determining a first weight value for each candidate object according to the number of indirect connections between that candidate object and the replaceable image object in the knowledge graph;
determining a second weight value for each candidate object based on the summary's semantic information and the candidate object's description information;
determining a final weight value for each candidate object from the first and second weight values;
displaying, in the graphical user interface and in knowledge-graph form, all candidate objects whose final weight value exceeds a preset value, where the background color of each candidate object is determined by its final weight value;
in response to the user selecting a target candidate object from the displayed candidates, taking that object as the target image object.
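A minimal sketch of the weighting scheme, under stated assumptions: the patent only says that a first weight (graph connectivity) and a second weight (summary relevance) combine into a final weight, so the weighted sum, the Jaccard relevance measure, and all sample data below are invented for illustration.

```python
# Illustrative candidate weighting from the knowledge graph.

def first_weight(indirect_links, max_links=10):
    """More indirect connections to the replaceable image object -> higher."""
    return min(indirect_links, max_links) / max_links

def second_weight(summary_terms, description_terms):
    """Crude relevance: Jaccard overlap of summary and description terms."""
    a, b = set(summary_terms), set(description_terms)
    return len(a & b) / len(a | b) if a | b else 0.0

def final_weight(w1, w2, alpha=0.5):
    """Assumed combination rule: a simple weighted sum."""
    return alpha * w1 + (1 - alpha) * w2

# Hypothetical candidates: name -> (indirect links, description terms).
candidates = {
    "sunset_photo": (4, ["sunset", "beach"]),
    "city_map": (1, ["map", "city"]),
}
summary = ["sunset", "travel", "beach"]
THRESHOLD = 0.3  # preset value: only candidates above it are displayed

shown = {
    name: final_weight(first_weight(n), second_weight(summary, terms))
    for name, (n, terms) in candidates.items()
}
shown = {name: w for name, w in shown.items() if w > THRESHOLD}
```

In the interface described above, the final weight would also drive each candidate's background color before the user picks the target image object.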
In a second aspect, an embodiment of the present application provides an information publishing apparatus, including:
a display module, configured to display an information editing interface on the graphical user interface in response to an information publishing instruction issued by a user; the information editing interface includes a title editing control and a content editing control; the content editing control receives content information input by the user; the content information includes any one or more of: text information, image information, video information, and link information;
an input module, configured to display, in response to an input operation performed by the user in the content editing control, the content information generated by the input operation in the content editing control, to indicate to the user that the input operation is complete;
a generating module, configured to perform general sensitivity detection on the content information in response to a publishing instruction issued by the user, and to prompt the user to revise the sensitive information found by the detection, so as to generate a publishable document;
an editing module, configured to perform a first intelligent edit on the publishable document according to the content editing model corresponding to its publishing region, to generate a document to be published for that region; the first intelligent edit includes one or more of text content conversion, image processing, video processing, and link modification;
a publishing module, configured to publish the document to be published.
Optionally, the editing module includes:
a first editing unit, configured to identify target text information in the publishable document using the content editing model corresponding to the publishing region, to determine a replaceable text object, and to replace the replaceable text object with the dialect of the publishing region using that model, to generate the document to be published;
a second editing unit, configured to identify images in the publishable document using the image editing model corresponding to the publishing region, to determine a replaceable image object; to select a target image object from the candidate objects associated with the replaceable image object based on the semantic information of the document summary; and to replace the replaceable image object with the target image object, to generate the document to be published;
a third editing unit, configured to extract phoneme information from the video information, determine sensitive speech segments based on the phoneme information, and cut the video based on those segments;
a fourth editing unit, configured to verify the sensitivity of the network address corresponding to the link information and remove any network address whose sensitivity exceeds a preset value.
Optionally, the second editing unit includes:
a segmentation subunit, configured to perform grid segmentation on an original image in the publishable document three times, using a first, second, and third grid size, to generate a first, second, and third segmentation result; the sub-images in each segmentation result can be stitched back into the original image; the first grid size is more than twice the second grid size, and the second grid size is more than twice the third grid size;
a first determining subunit, configured to identify the sub-images in the three segmentation results against a preset sensitive image, to determine whether any segmentation result contains a target sub-image whose similarity to the preset sensitive image exceeds a preset value; the preset sensitive image is any one or more of: a sensitive image of the publishing region, and a sensitive image determined according to the types of people in the publishing region;
a statistics subunit, configured to count, for each segmentation result, the proportion of target sub-images it contains;
a second determining subunit, configured to determine that the document to be published can be generated directly from the original image if the proportion of target sub-images in all three segmentation results is below the preset value;
a third determining subunit, configured to perform foreground extraction on the original image if the proportion of target sub-images in at least one of the three segmentation results exceeds the preset value, and to run sensitivity identification on the extracted target foreground image using the image editing model corresponding to the publishing region; and, if the sensitivity of the target foreground image is too high, to determine that it is a replaceable image object.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the above method.
The information publishing method provided by the application first displays an information editing interface on a graphical user interface in response to an information publishing instruction issued by a user; the information editing interface includes a title editing control and a content editing control; the content editing control receives content information input by the user, which includes any one or more of text information, image information, video information, and link information. Next, in response to an input operation performed by the user in the content editing control, the content information generated by the input operation is displayed in the control, indicating to the user that the input is complete. Then, in response to a publishing instruction issued by the user, general sensitivity detection is performed on the content information and the user is prompted to revise the sensitive information found, generating a publishable document. The publishable document is then given a first intelligent edit according to the content editing model corresponding to its publishing region, producing a document to be published for that region; the first intelligent edit includes one or more of text content conversion, image processing, video processing, and link modification. Finally, the document to be published is published.
In some embodiments, the platform accepting the publishing task can automatically and intelligently edit the content to be published, so that the published document content is controllable, avoiding the situation where content is found to be inappropriate only after publication and accountability must then be pursued.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for publishing information according to an embodiment of the present application;
fig. 2 is a schematic view of a first information editing interface provided in an embodiment of the present application;
fig. 3 is a schematic view of a second information editing interface provided in an embodiment of the present application;
fig. 4 is a schematic view of a third information editing interface provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of a method for processing an image according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an information distribution apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed application, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
An embodiment of the present application provides an information publishing method, as shown in fig. 1, including:
S101: in response to an information publishing instruction issued by a user, displaying an information editing interface on a graphical user interface; the information editing interface includes a title editing control and a content editing control; the content editing control receives content information input by the user; the content information includes any one or more of: text information, image information, video information, and link information;
S102: in response to an input operation performed by the user in the content editing control, displaying the content information generated by the input operation in the content editing control, to indicate to the user that the input operation is complete;
S103: in response to a publishing instruction issued by the user, performing general sensitivity detection on the content information and prompting the user to revise the sensitive information found by the detection, so as to generate a publishable document;
S104: performing a first intelligent edit on the publishable document according to the content editing model corresponding to its publishing region, to generate a document to be published for that region; the first intelligent edit includes one or more of text content conversion, image processing, video processing, and link modification;
S105: publishing the document to be published.
The executing entity of this solution is a system consisting of an intelligent terminal operated by the user and a server of the service platform, so in principle each step can be executed by either the server or the terminal.
In step S101, the information publishing instruction is usually the user's operation of starting software or a mobile phone app. For example, the user may click an icon on the phone to open the corresponding information publishing app, or click a function control within an already running app.
The title editing control and the content editing control in the information editing interface are two independent controls, mainly because the auditing strategies for titles and for content differ, so input received through two independent controls can be checked more accurately.
There are two types of link information: network links and terminal links. A network link usually points to a web page or website, or to an address in a wide area network. A terminal link usually points to a location on the terminal itself, such as the address of a file on the intelligent terminal the user is operating (through which a file stored on the terminal can be retrieved directly), or to an address in a local area network.
In step S102, the user can perform input operations in the content editing control in many ways, such as direct typing or copy-and-paste. After input, the completed content is displayed in the content editing control: text information can be displayed directly, and image information can be displayed either as the original image or as a thumbnail. The scaling factor of a thumbnail can be determined by the proportion of the graphical user interface that the content editing control occupies; the overall requirement is that a thumbnail occupy at most 60% of the content editing control, since if images take up too much space, other input information cannot be seen.
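Only the 60% ceiling comes from the text above; the layout rule below (fit both dimensions, then cap the thumbnail's area at 60% of the control's area) is one possible way to realize it, not the patent's own formula.

```python
# Sketch of a thumbnail scale factor respecting the 60% ceiling.

MAX_FRACTION = 0.60  # thumbnail may occupy at most 60% of the control

def thumbnail_scale(img_w, img_h, ctrl_w, ctrl_h):
    """Largest scale (<= 1) at which the thumbnail fits the control and
    its area stays within MAX_FRACTION of the control's area."""
    fit = min(ctrl_w / img_w, ctrl_h / img_h, 1.0)
    # Area constraint: (s*img_w) * (s*img_h) <= MAX_FRACTION * ctrl_w * ctrl_h
    area_cap = (MAX_FRACTION * ctrl_w * ctrl_h / (img_w * img_h)) ** 0.5
    return min(fit, area_cap)
```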
The video information may be displayed as a key frame or the first frame of the video, or as text summary information for the video; the text summary may be introductory text and a thumbnail icon for the video, or text extracted from the video by a mathematical model (for example, text describing the images in the video). The link information may be displayed directly as the link's address, or as a profile of the object at the corresponding network address, such as a website profile or a summary of the linked content.
In step S103, after completing the input, the user can manually issue a publishing instruction, upon which the intelligent terminal or the server performs general sensitivity detection on the content information. The manually issued publishing instruction may be the user touching a control indicating that input is complete; alternatively, if the user has not updated the content information for a long period, the user may be deemed to have issued the publishing instruction.
After detecting that the user has issued the publishing instruction, general sensitivity detection can be performed on the content information, and the user can be prompted to revise the sensitive information, thereby generating the publishable document. Sensitive information comes in many types, such as content obviously irrelevant to the input: for example, an identity document appearing in a landscape image, or text about satellite technology, which has no strong relevance to food, appearing in an introduction to food. When prompting, the sensitive information may be marked prominently, for example highlighted or circled, to alert the user that the current input contains sensitive information.
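A toy version of the highlighting step: locate the character spans of known sensitive terms so the interface can highlight or circle them. The term list is a placeholder; the general sensitivity detector described in the text is a far richer model.

```python
# Find character spans of sensitive terms for highlighting.

SENSITIVE_TERMS = ["secret", "classified"]  # placeholder term list

def find_sensitive_spans(text):
    """Return sorted (start, end) spans of sensitive terms in `text`."""
    spans = []
    low = text.lower()
    for term in SENSITIVE_TERMS:
        start = low.find(term)
        while start != -1:
            spans.append((start, start + len(term)))
            start = low.find(term, start + 1)
    return sorted(spans)
```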
In step S104, the first intelligent edit can be performed automatically on the publishable document based on the content editing model corresponding to its publishing region. Unlike step S103, the intelligent edit in step S104 is more automated and is performed with the publishing region in mind, whereas the general sensitivity detection in step S103 is not specific to any particular region.
Text content conversion may be deleting or replacing text content. Image processing may be cleaning up sensitive content (or erroneous content) by photo editing (e.g., Photoshop). Video processing may be deleting certain frames or editing them. Link modification has two forms, deleting the link or replacing it, where replacement means substituting the old link with a new one whose actual content (such as the website content a network link points to) is the same or similar.
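The two link-modification strategies can be sketched as a single decision: keep a link below the sensitivity threshold, replace it when a same-content substitute is known, otherwise delete it. The replacement mapping and the threshold are invented for illustration.

```python
# Sketch of link modification: keep, replace, or delete.

REPLACEMENTS = {"http://old.example/page": "http://mirror.example/page"}

def modify_link(url, sensitivity, limit=0.8):
    """Return the original link if acceptable, a substitute with the same
    or similar content if one is known, or None to delete the link."""
    if sensitivity <= limit:
        return url
    return REPLACEMENTS.get(url)  # None means delete
```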
The content editing model corresponding to the publishing region in step S104 is a model trained on the characteristics of that region.
In step S105, the intelligently edited document to be published is published.
According to the scheme, the published content can be automatically intelligently edited by the platform accepting the publishing task, the published document content is controllable, and the situation that the published content is found to be inappropriate after being published and then responsibility tracing is carried out is avoided.
As shown in fig. 2, specific contents of the information editing interface are shown, and text information, image information, video information, and link information can be directly input in a dialog box in the middle, and these information are all displayed in the same control after being input, so as to represent correspondence of the contents. The left side of the information editing interface shown in fig. 2 shows new, draft, and other functional controls, and by operating these controls, the user can enter the corresponding interface for further operation. The right side of the information editing interface shows some controls for text detection and picture detection, and only functional controls corresponding to the two detections appear because only text information and image information are input in the current interface. Specifically, as the currently input content, the text detection is passed, but the picture detection is problematic, specifically, because the image has sensitive information that does not conform to the picture. If the user does not process the detected sensitive information, the content editing model may further automatically process the sensitive information.
For example, fig. 3 shows an interface for adjusting the content editing model, in which three controls are shown that can open the text detection model, the picture detection model, and the video detection model. The user can touch any of these three controls to enter the corresponding model-adjustment interface and update the model content in time.
Fig. 4 shows thumbnail icons of the aforementioned videos.
Specifically, the method comprises the following steps of intelligently editing the publishable document for the first time according to a content editing model corresponding to the publishing area of the publishable document to generate a document to be published for the publishing area, wherein the method comprises any one or more of the following modes:
the first method comprises the following steps: identifying target text information in the publishable document by using a content editing model corresponding to the publishing region to determine a replaceable text object in the publishable document, and replacing the replaceable text object by using a dialect of the publishing region by using the content editing model corresponding to the publishing region to generate a document to be published;
and the second method comprises the following steps: identifying the image in the publishable document by using an image editing model corresponding to the publishing area so as to determine the replaceable image object; selecting a target image object from candidate objects associated with the replaceable image object based on semantic information of the abstract of the document to be published so as to replace the replaceable image object by using the target image object and generate the document to be published;
and the third is that: extracting phoneme information in the video information, determining a sensitive voice fragment based on the phoneme information, and cutting the video information based on the sensitive voice fragment;
and fourthly: and verifying the sensitivity of the network address corresponding to the link information, and clearing the network address with the sensitivity exceeding a preset value.
The first way processes the text information; a replaceable text object refers to sensitive or faulty information that may be problematic, for example erroneous punctuation or wrongly used characters. That is, the distributed contents should not cause a sense of incongruity among the population in the distribution area, so as to ensure the reading volume of the distributed contents.
The second way processes the image, and the processing may be to adjust the sensitive image. Because the user usually does not have image-editing (PS) skills, in step S103 the image is replaced as a whole; but in the second implementation of step S104, partial replacement of the image may be performed by the image editing model, while ensuring that all contents other than the replaceable image object remain the original contents.
The third mode is video processing. Specifically, there are two sub-modes: processing video frames and processing sound. The processing of video frames may refer to the image processing of the second way. The processing of sound may determine the utterance content (based on speech recognition technology) and determine the speaker's identity based on the phoneme information; if the utterance content is sensitive or the speaker's identity is sensitive, corresponding processing should be performed, such as deleting the content of the corresponding duration or performing operations such as voice alteration or replacement.
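As an illustrative sketch only (the disclosure does not specify an implementation), the sound-processing sub-mode can be modeled as mapping recognized transcript segments to cut spans and inverting those spans into the intervals of video to keep. The helper names, the `(start, end, text)` transcript shape, and the substring matching against a sensitive-word list are all assumptions for illustration:

```python
def find_sensitive_segments(transcript, sensitive_words):
    """Return (start, end) spans whose recognized text contains a sensitive word."""
    spans = []
    for start, end, text in transcript:
        if any(word in text for word in sensitive_words):
            spans.append((start, end))
    return spans

def keep_intervals(duration, cut_spans):
    """Invert the cut spans into the intervals of video to retain."""
    kept, cursor = [], 0.0
    for start, end in sorted(cut_spans):
        if start > cursor:
            kept.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        kept.append((cursor, duration))
    return kept
```

The retained intervals would then be passed to whatever video-cutting tool the platform uses.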
The fourth way verifies the sensitivity of the network address corresponding to the link information. For example, if the website corresponding to the link information contains sensitive content, the link information should be deleted; further, new link information can be found, according to the information the user intends to publish, to replace the original link information. For example, new link information may be found from other content (text information, image information, video information) entered by the user.
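A minimal sketch of the fourth way, assuming a hypothetical `score_fn` that returns a sensitivity score for a network address (the disclosure does not define how that score is computed, only that addresses exceeding a preset value are cleared):

```python
def filter_links(links, score_fn, threshold=0.5):
    """Keep only links whose sensitivity score does not exceed the threshold.

    `score_fn` is assumed to map a URL string to a numeric sensitivity score;
    links scoring above `threshold` are cleared, per the fourth editing mode.
    """
    return [url for url in links if score_fn(url) <= threshold]
```

In practice `score_fn` would fetch and analyze the page behind the link; here it is left abstract.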
Further, as shown in fig. 5, a second way of processing the image may be composed of the following steps:
s201, carrying out image grid segmentation on an original image in a distributable document for three times according to a first grid size, a second grid size and a third grid size respectively to generate a first segmentation result, a second segmentation result and a third segmentation result; all sub-images in the first segmentation result can be spliced into an original image; all sub-images in the second segmentation result can be spliced into an original image; all sub-images in the third segmentation result can be spliced into an original image; the first grid size is greater than twice the second grid size; the second grid size is greater than twice the third grid size;
s202, respectively identifying sub-images in the first segmentation result, the second segmentation result and the third segmentation result by using a preset sensitive image to determine whether a target sub-image with similarity exceeding a preset numerical value with a preset sensitivity exists in the first segmentation result, the second segmentation result and the third segmentation result; presetting the sensitive image as any one or more of the following: sensitive images of the release area and sensitive images determined according to the personnel types of personnel in the release area;
s203, respectively counting the proportion of the target subimages in the first segmentation result, the second segmentation result and the third segmentation result;
s204, if the ratio of the target sub-images in the first segmentation result, the second segmentation result and the third segmentation result is less than a preset value, determining that the document to be issued can be directly generated based on the original image;
s205, if the proportion of the target sub-images in at least one of the first segmentation result, the second segmentation result and the third segmentation result is larger than a preset value, foreground extraction is carried out on the original image, and sensitivity identification is carried out on the target foreground image obtained by foreground extraction by using an image editing model corresponding to a release area; and if the sensitivity of the target foreground image is too high, determining that the target foreground image is a replaceable image object.
In step S201, meshes of different sizes are used to perform mesh segmentation on the original image; that is, three independent mesh segmentations are performed on the original image, with no association between any two of them. Accordingly, all sub-images in the first segmentation result can be spliced into the original image; all sub-images in the second segmentation result can be spliced into the original image; and all sub-images in the third segmentation result can be spliced into the original image. Specifically, the first grid size is greater than twice the second grid size, and the second grid size is greater than twice the third grid size, which ensures that the three mesh segmentations differ in granularity.
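The three independent segmentations of step S201 can be sketched as plain array tiling. The grid sizes 32, 15, and 7 below are illustrative values chosen only to satisfy the stated constraints (32 > 2×15 − something is not the constraint; concretely 32 > 30 = 2×15 and 15 > 14 = 2×7 both hold); edge tiles may be smaller than the grid size, and each pass partitions the image so the tiles can be spliced back into the original:

```python
import numpy as np

def grid_split(image, tile):
    """Split an H x W image into tile x tile sub-images (edge tiles may be smaller)."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

img = np.arange(64 * 64).reshape(64, 64)
# Three independent passes; each grid size is more than twice the next finer one.
results = {size: grid_split(img, size) for size in (32, 15, 7)}
```

Because every pass is a partition of the same image, the pixel sums of the tiles in any pass add up to the pixel sum of the original.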
In step S202, the sensitive images (images in the sensitive-image library corresponding to the distribution area) are used to compare the similarity of each sub-image in the three segmentation results; the purpose of the comparison is to find one or more sub-images whose similarity to a sensitive image is too high, and such a sub-image is a target sub-image. Specifically, there are two types of sensitive images: sensitive images of the distribution area, and sensitive images determined according to the personnel types of the personnel in the distribution area. The first type is directly related to the region itself, such as boundary markers of the region, or content that is not suitable for presentation in certain locations (e.g., content unrelated to learning is not suitable for presentation in a classroom). The second type is determined by the personnel, e.g., for students in a classroom, content other than study material is unsuitable. These contents can be determined directly based on big data and then pre-stored in a database (recorded in the form of a knowledge graph or a mind map).
In step S203, the occupation ratio of the target sub-images can be calculated for each segmentation result; this ratio actually reflects how much space the target sub-images occupy in the original image. If the ratio is too small, the target content can properly be ignored, or is not the focus of what the image is expected to express, and step S204 can be executed. If the ratio of target sub-images in some result is too large, further analysis is needed: in step S205, the foreground image is extracted (the extraction precision here is high, so that a sufficient foreground image is obtained), and sensitivity identification is then performed on the extracted target foreground image using the image editing model corresponding to the release area; if the sensitivity of the target foreground image is too high, the target foreground image is determined to be the replaceable image object. Steps S201-S202 mainly use sensitive images for direct image-similarity comparison, whereas step S205 uses the image editing model. This is mainly because the image editing model is the last line of defense: the model itself is very large and occupies substantial computing power during operation, so the image-comparison method is applied first and the model-identification method afterwards. Meanwhile, the model is not available on every terminal, so generally speaking, steps S201-S202 are performed by the personal intelligent terminal (e.g., a computer) used by the user, while the sensitivity identification using the model in step S205 is performed by the server.
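The S203/S204/S205 decision reduces to a per-pass ratio-threshold test. This sketch assumes each segmentation pass is summarized as a list of boolean flags (one per sub-image, true if it matched a sensitive image in S202); the threshold value is illustrative, standing in for the "preset numerical value":

```python
def target_ratio(flags):
    """Fraction of sub-images in one segmentation pass flagged as target sub-images."""
    return sum(flags) / len(flags)

def needs_foreground_check(pass_flags, threshold=0.1):
    """S204/S205 decision: escalate to foreground extraction (S205) if the
    target-sub-image ratio of at least one pass exceeds the threshold;
    otherwise the original image can be used directly (S204)."""
    return any(target_ratio(flags) > threshold for flags in pass_flags)
```

If this returns false for all three passes, the document to be issued is generated directly from the original image.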
Specifically, the step of selecting a target image object from candidate objects associated with the replaceable image object based on semantic information of the abstract of the document to be published to replace the replaceable image object with the target image object to generate the document to be published includes:
generating text abstract information based on the text information of the document to be issued;
identifying the image in the document to be issued to generate image abstract information in a character form;
determining semantic information of the abstract of the document to be issued based on the text abstract information and the image abstract information;
selecting a target image object which has relevance with the semantic information of the abstract from all candidate objects in a database based on a knowledge graph where the replaceable image object is generated in advance;
and replacing the replaceable image object by using the target image object to generate the document to be published.
The text abstract information can be automatically generated using a preset mathematical model. Text summarization is generally divided into extractive summarization and abstractive summarization; at present, extractive summarization is still widely used in industry, as it is not prone to topic drift, has wide adaptability, and is fast. The most traditional extractive-summarization scheme is undoubtedly the Lead3 algorithm. The image abstract information is similar and can also be generated automatically using a mathematical model. The two kinds of abstract information express the core idea of the document to be published from the perspectives of the text content and the image content respectively, so the semantic information of the abstract of the document to be published can be generated based on both (the abstract information can be generated by direct combination, or refined a second time from both using a mathematical model). Then, based on the knowledge graph in which the replaceable image object is located, a target image object correlated with the semantic information of the abstract can be selected from all candidate objects in the database. This process mainly selects the target image object based on the degree of direct correlation between different entities in the knowledge graph (indirect correlation may also be considered, which reflects how many objects must be traversed to connect two entities; equivalently, the degree of correlation corresponds to the shortest path between two entities, and usually an entity whose shortest path is smaller than a preset value is taken as the target image object).
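The Lead3 algorithm mentioned above simply takes the first three sentences of the text as the extractive summary. A minimal sketch (the regex-based sentence splitter is a simplification; real systems use proper sentence segmentation):

```python
import re

def lead3_summary(text, k=3):
    """Lead-k extractive baseline: the first k sentences serve as the summary."""
    # Naive split on whitespace that follows sentence-ending punctuation.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]
    return ' '.join(sentences[:k])
```

Despite its simplicity, Lead3 is a strong baseline for news-like text, where the opening sentences usually carry the core idea.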
The target image object can then be used to replace the alternative image object to generate the document to be published.
Specifically, the step of selecting, from all candidate objects in the database, the target image object having a correlation with the semantic information of the abstract based on the knowledge-graph in which the replaceable image object is pre-generated may be implemented as follows:
determining a first weight value of each candidate object according to the number of indirect connections between each candidate object and the replaceable image object in the knowledge graph where the replaceable image object is located;
determining a second weight value of each candidate object based on the semantic information of the abstract and the description information of each candidate object;
determining a final weight value of each candidate object based on the first weight value and the second weight value;
displaying all candidate objects with final weight values larger than a preset numerical value in a graphical user interface in a knowledge graph mode; the background color of each candidate object is determined according to the size of the final weight value;
and responding to the selection operation of the user for the target candidate object in all the candidate objects, and taking the target candidate object as the target image object.
That is, a first weight value is determined according to the number of indirect connections (e.g., the length of the shortest path between two entities, or how many entities or relationships are needed for two entities to form a connection), and a second weight value of each candidate object is determined according to the correlation/similarity between the semantic information of the abstract of the document to be published and the description information (usually pre-entered) of each candidate object. A final weight value can then be determined from the first weight value and the second weight value, for example by direct addition.
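A sketch of the two weight values, with the first weight derived from the BFS hop count in an adjacency-list knowledge graph and the final weight formed by direct addition as described. The inverse-distance mapping for the first weight and the externally supplied `semantic_sim` score are assumptions for illustration; the disclosure does not fix either formula:

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """BFS hop count between two entities in an adjacency-list knowledge graph."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float('inf')

def final_weight(graph, replaceable, candidate, semantic_sim):
    """First weight from graph distance (closer => larger), second weight from
    abstract/description similarity; combined by direct addition."""
    first = 1.0 / (1.0 + shortest_path_len(graph, replaceable, candidate))
    return first + semantic_sim
```

Candidates with a final weight above the preset value would then be shown in the graphical user interface, with more conspicuous background colors for larger weights.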
Then all candidate objects (or those candidate objects whose final weight value is greater than the preset value) can be displayed in the graphical user interface, so that the user can make a selection, leaving the determination of the final target image object to the user. To improve efficiency and accuracy, the background color of each candidate object is determined according to the size of its final weight value; that is, the larger the final weight value, the more conspicuous the background color should be, so that the user can notice it quickly.
Based on the method for information publishing, an embodiment of the present application provides an apparatus for information publishing, as shown in fig. 6, including:
the display module 601 is configured to respond to an information issuing instruction issued by a user and display an information editing interface on a graphical user interface; the information editing interface comprises a title editing control and a content editing control; the content editing control is used for receiving content information input by a user; the content information includes any one or more of: text information, image information, video information, and link information;
the input module 602 is configured to respond to an input operation issued by a user in the content editing control, and display content information generated according to the input operation in the content editing control to prompt the user that the input operation is completed;
a generating module 603, configured to respond to a publishing instruction issued by a user, perform general sensitivity detection on the content information, and prompt the user to modify sensitivity information obtained by the general sensitivity detection, so as to generate a publishable document;
the editing module 604 is configured to perform first intelligent editing on the publishable document according to a content editing model corresponding to a publishing area of the publishable document, so as to generate a document to be published for the publishing area; the first intelligent editing comprises one or more of character content conversion, image processing, video processing and link modification;
the publishing module 605 is configured to publish the document to be published.
Optionally, the editing module includes:
the first editing unit is used for identifying target text information in the publishable document by using a content editing model corresponding to the publishing region so as to determine a replaceable text object in the publishable document, and replacing the replaceable text object by using a dialect of the publishing region by using the content editing model corresponding to the publishing region so as to generate a document to be published;
the second editing unit is used for identifying the image in the publishable document by using the image editing model corresponding to the publishing area so as to determine the replaceable image object; selecting a target image object from candidate objects associated with the replaceable image object based on semantic information of the abstract of the document to be published, and replacing the replaceable image object by using the target image object to generate the document to be published;
the third editing unit is used for extracting phoneme information in the video information, determining a sensitive voice fragment based on the phoneme information, and cutting the video information based on the sensitive voice fragment;
and the fourth editing unit is used for verifying the sensitivity of the network address corresponding to the link information and clearing the network address of which the sensitivity exceeds a preset value.
Optionally, the second editing unit includes:
the segmentation subunit is used for performing image grid segmentation on the original image in the distributable document for three times according to the first grid size, the second grid size and the third grid size respectively to generate a first segmentation result, a second segmentation result and a third segmentation result; all sub-images in the first segmentation result can be spliced into an original image; all sub-images in the second segmentation result can be spliced into an original image; all sub-images in the third segmentation result can be spliced into an original image; the first grid size is greater than twice the second grid size; the second grid size is greater than twice the third grid size;
the first determining subunit is used for respectively identifying the sub-images in the first segmentation result, the second segmentation result and the third segmentation result by using a preset sensitive image so as to determine whether a target sub-image with similarity exceeding a preset numerical value with preset sensitivity exists in the first segmentation result, the second segmentation result and the third segmentation result or not; presetting the sensitive image as any one or more of the following: sensitive images of the release area and sensitive images determined according to the personnel types of personnel in the release area;
the statistics subunit is used for respectively counting the occupation ratios of the target sub-images in the first segmentation result, the second segmentation result and the third segmentation result;
the second determining subunit is used for determining that the document to be issued can be directly generated based on the original image if the proportion of the target sub-image in the first segmentation result, the second segmentation result and the third segmentation result is smaller than a preset numerical value;
the third determining subunit is used for performing foreground extraction on the original image and performing sensitivity identification on a target foreground image obtained by foreground extraction by using an image editing model corresponding to the release area if the proportion of the target sub-image in at least one of the first segmentation result, the second segmentation result and the third segmentation result is greater than a preset numerical value; and if the sensitivity of the target foreground image is too high, determining that the target foreground image is a replaceable image object.
Optionally, the second editing unit includes:
the generating subunit is used for generating text abstract information based on the text information of the document to be issued;
the identification subunit is used for identifying the image in the document to be issued so as to generate the image abstract information in the form of characters;
the abstract sub-unit is used for determining semantic information of the abstract of the document to be issued based on the text abstract information and the image abstract information;
the association subunit is used for selecting a target image object which has relevance with the semantic information of the abstract from all candidate objects in a database based on a knowledge graph in which the replaceable image object is generated in advance;
and the publishing subunit is used for replacing the replaceable image object by using the target image object to generate a document to be published.
Optionally, the association subunit includes:
the fourth determining subunit is used for determining a first weight value of each candidate object according to the number of indirect connections between each candidate object and the replaceable image object in the knowledge graph where the replaceable image object is located;
a fifth determining subunit, configured to determine a second weight value of each candidate object based on the semantic information of the digest and the description information of each candidate object;
a sixth determining subunit operable to determine a final weight value of each candidate object based on the first weight value and the second weight value;
the seventh determining subunit is used for displaying all candidate objects with final weight values larger than a preset numerical value in a graphical user interface in a knowledge graph mode; the background color of each candidate object is determined according to the final weight value;
and the eighth determining subunit is used for responding to the selection operation of the user on the target candidate object in all the candidate objects and taking the target candidate object as the target image object.
Corresponding to the method of information distribution in fig. 1, an embodiment of the present application further provides a computer device 700, as shown in fig. 7, the device includes a memory 701, a processor 702, and a computer program stored in the memory 701 and executable on the processor 702, where the processor 702 implements the method of information distribution when executing the computer program.
Specifically, the memory 701 and the processor 702 can be general memories and processors, which are not limited in particular, and when the processor 702 runs a computer program stored in the memory 701, the information publishing method can be executed, so that the problems that the content of a document published in the prior art is not controllable, and the content is found to be inappropriate after publication are solved.
Corresponding to the method of information distribution in fig. 1, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method of information distribution.
Specifically, the storage medium can be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the information publishing method can be executed, which solves the problems in the prior art that the content of a published document is uncontrollable and that content is found to be inappropriate only after publication: the platform that accepts the publishing task can automatically and intelligently edit the content to be published, so that the published document content is controllable, avoiding the situation in which content is found to be inappropriate after publication and responsibility must then be traced.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes may still be made to the embodiments described above, or equivalent substitutions made for some features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the present disclosure and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for information distribution, comprising:
responding to an information issuing instruction issued by a user, and displaying an information editing interface on a graphical user interface; the information editing interface comprises a title editing control and a content editing control; the content editing control is used for receiving content information input by a user; the content information includes any one or more of: text information, image information, video information, and link information;
responding to an input operation issued by a user in a content editing control, and displaying content information generated according to the input operation in the content editing control to prompt the user that the input operation is finished;
responding to a release instruction issued by a user, carrying out universal sensitivity detection on the content information, and prompting the user to modify sensitivity information obtained by the universal sensitivity detection so as to generate a releasable document;
according to a content editing model corresponding to a publishing area of a publishable document, intelligently editing the publishable document for the first time to generate a document to be published aiming at the publishing area; the first intelligent editing comprises one or more of character content conversion, image processing, video processing and link modification;
and publishing the document to be published.
2. The method according to claim 1, wherein performing the first intelligent editing on the publishable document according to the content editing model corresponding to the publishing area of the publishable document to generate the document to be published for the publishing area comprises any one or more of the following:
first: identifying target text information in the publishable document using the content editing model corresponding to the publishing area to determine a replaceable text object in the publishable document, and replacing the replaceable text object with a dialect of the publishing area using the content editing model corresponding to the publishing area, to generate the document to be published;
second: identifying an image in the publishable document using an image editing model corresponding to the publishing area to determine a replaceable image object; selecting a target image object from candidate objects associated with the replaceable image object based on semantic information of an abstract of the document to be published, and replacing the replaceable image object with the target image object to generate the document to be published;
third: extracting phoneme information from the video information, determining a sensitive speech segment based on the phoneme information, and cutting the video information based on the sensitive speech segment;
fourth: verifying the sensitivity of a network address corresponding to the link information, and removing any network address whose sensitivity exceeds a predetermined value.
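The third editing path above reduces, once sensitive speech segments have been located, to cutting those time ranges out of the video. A small sketch of that interval arithmetic follows; phoneme extraction itself is assumed to be done upstream, and the function name is illustrative:

```python
# Sketch of the video-cutting step (third editing path in claim 2): given
# sensitive speech segments as (start, end) times in seconds, compute the
# time ranges of the video that are kept after cutting.
def cut_segments(duration, sensitive):
    """Return the kept (start, end) ranges after removing sensitive segments."""
    kept, cursor = [], 0.0
    for start, end in sorted(sensitive):
        if start > cursor:
            kept.append((cursor, start))  # keep everything up to this segment
        cursor = max(cursor, end)         # skip past the sensitive range
    if cursor < duration:
        kept.append((cursor, duration))   # keep the tail of the video
    return kept
```

Sorting first makes the routine robust to overlapping or out-of-order segments; the kept ranges could then be fed to any video trimming tool.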
3. The method of claim 2, wherein identifying the image in the publishable document using the image editing model corresponding to the publishing area to determine the replaceable image object comprises:
performing image grid segmentation on an original image in the publishable document three times, according to a first grid size, a second grid size and a third grid size respectively, to generate a first segmentation result, a second segmentation result and a third segmentation result; all sub-images in each of the first, second and third segmentation results can be spliced back into the original image; the first grid size is greater than twice the second grid size; the second grid size is greater than twice the third grid size;
identifying the sub-images in the first, second and third segmentation results respectively against a preset sensitive image, to determine whether any of the three segmentation results contains a target sub-image whose similarity to the preset sensitive image exceeds a preset value; the preset sensitive image is any one or more of the following: a sensitive image of the publishing area, and a sensitive image determined according to the personnel types of personnel in the publishing area;
counting the proportion of target sub-images in each of the first, second and third segmentation results;
if the proportion of target sub-images in each of the first, second and third segmentation results is smaller than a preset value, determining that the document to be published can be generated directly based on the original image;
if the proportion of target sub-images in at least one of the first, second and third segmentation results is greater than the preset value, performing foreground extraction on the original image, and performing sensitivity identification on the target foreground image obtained by the foreground extraction using the image editing model corresponding to the publishing area; and if the sensitivity of the target foreground image is too high, determining the target foreground image to be a replaceable image object.
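The three-pass segmentation and ratio test above can be sketched as follows. The image is modeled as a 2D list of pixel values, and `is_sensitive_tile` is a stand-in for the claimed similarity check against a preset sensitive image (a hypothetical predicate, since the claim does not fix the similarity measure):

```python
# Sketch of the three-pass grid segmentation and ratio test in claim 3.
def split_into_tiles(image, grid):
    """Cut the image into grid x grid tiles (edge tiles may be smaller)."""
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h, grid):
        for left in range(0, w, grid):
            tiles.append([row[left:left + grid] for row in image[top:top + grid]])
    return tiles  # the tiles can be spliced back into the original image

def flagged_ratio(image, grid, is_sensitive_tile):
    """Proportion of tiles flagged as similar to the preset sensitive image."""
    tiles = split_into_tiles(image, grid)
    return sum(map(is_sensitive_tile, tiles)) / len(tiles)

def needs_foreground_check(image, grids, is_sensitive_tile, threshold):
    # Claim 3: if the flagged proportion exceeds the threshold in at least
    # one of the passes, fall back to foreground extraction.
    return any(flagged_ratio(image, g, is_sensitive_tile) > threshold for g in grids)
```

Per the claim, `grids` would hold three sizes where each is more than twice the next, e.g. something like `(9, 4, 1)`; coarse tiles catch large sensitive regions while fine tiles catch small ones.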
4. The method of claim 2, wherein selecting the target image object from the candidate objects associated with the replaceable image object based on the semantic information of the abstract of the document to be published, and replacing the replaceable image object with the target image object to generate the document to be published, comprises:
generating text abstract information based on the text information of the document to be published;
identifying the image in the document to be published to generate image abstract information in text form;
determining the semantic information of the abstract of the document to be published based on the text abstract information and the image abstract information;
selecting, from all candidate objects in a database, a target image object associated with the semantic information of the abstract, based on a pre-generated knowledge graph in which the replaceable image object is located;
and replacing the replaceable image object with the target image object to generate the document to be published.
5. The method of claim 4, wherein selecting, from all candidate objects in the database, the target image object associated with the semantic information of the abstract based on the pre-generated knowledge graph in which the replaceable image object is located comprises:
determining a first weight value for each candidate object according to the number of indirect connections between that candidate object and the replaceable image object in the knowledge graph in which the replaceable image object is located;
determining a second weight value for each candidate object based on the semantic information of the abstract and the description information of that candidate object;
determining a final weight value for each candidate object based on the first weight value and the second weight value;
displaying, in the graphical user interface in the form of a knowledge graph, all candidate objects whose final weight value is greater than a preset value; the background color of each candidate object being determined according to the magnitude of its final weight value;
and in response to a selection operation by the user on a target candidate object among all the candidate objects, taking the target candidate object as the target image object.
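The two-weight scheme in claim 5 can be sketched as below. The claim does not fix the concrete measures or the combining formula, so the choices here are illustrative assumptions: a saturating function of the connection count stands in for the graph-based first weight, token-overlap (Jaccard) similarity stands in for the semantic second weight, and a convex combination produces the final weight:

```python
# Sketch of the candidate weighting and shortlisting in claim 5.
def first_weight(indirect_connections):
    # More indirect connections to the replaceable image object -> higher weight.
    return indirect_connections / (1 + indirect_connections)

def second_weight(summary_text, description):
    # Token-overlap (Jaccard) similarity between abstract semantics and the
    # candidate's description; a stand-in for any semantic relatedness measure.
    a, b = set(summary_text.split()), set(description.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def final_weight(w1, w2, alpha=0.5):
    # Simple convex combination; the patent does not specify the formula.
    return alpha * w1 + (1 - alpha) * w2

def shortlist(candidates, summary_text, threshold):
    """candidates: list of (name, indirect_connections, description) tuples.
    Return (name, weight) pairs above the threshold, highest weight first."""
    scored = []
    for name, conns, desc in candidates:
        w = final_weight(first_weight(conns), second_weight(summary_text, desc))
        if w > threshold:
            scored.append((name, w))
    return sorted(scored, key=lambda t: -t[1])
```

The surviving candidates would then be rendered as a knowledge graph in the GUI, with background colors keyed to the final weight, and the user's selection becomes the target image object.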
6. An information publishing apparatus, comprising:
a display module, configured to display an information editing interface on a graphical user interface in response to an information publishing instruction issued by a user; the information editing interface comprises a title editing control and a content editing control; the content editing control is configured to receive content information input by the user; the content information comprises any one or more of: text information, image information, video information and link information;
an input module, configured to display, in response to an input operation performed by the user in the content editing control, content information generated according to the input operation in the content editing control, to prompt the user that the input operation is complete;
a generating module, configured to perform general sensitivity detection on the content information in response to a publishing instruction issued by the user, and prompt the user to modify the sensitive information obtained by the general sensitivity detection, so as to generate a publishable document;
an editing module, configured to perform first intelligent editing on the publishable document according to a content editing model corresponding to a publishing area of the publishable document, to generate a document to be published for the publishing area; the first intelligent editing comprises one or more of text content conversion, image processing, video processing and link modification;
and a publishing module, configured to publish the document to be published.
7. The apparatus of claim 6, wherein the editing module comprises:
a first editing unit, configured to identify target text information in the publishable document using the content editing model corresponding to the publishing area to determine a replaceable text object in the publishable document, and replace the replaceable text object with a dialect of the publishing area using the content editing model corresponding to the publishing area, to generate the document to be published;
a second editing unit, configured to identify an image in the publishable document using the image editing model corresponding to the publishing area to determine a replaceable image object, select a target image object from candidate objects associated with the replaceable image object based on semantic information of the abstract of the document to be published, and replace the replaceable image object with the target image object to generate the document to be published;
a third editing unit, configured to extract phoneme information from the video information, determine a sensitive speech segment based on the phoneme information, and cut the video information based on the sensitive speech segment;
and a fourth editing unit, configured to verify the sensitivity of a network address corresponding to the link information, and remove any network address whose sensitivity exceeds a predetermined value.
8. The apparatus according to claim 7, wherein the second editing unit comprises:
a segmentation subunit, configured to perform image grid segmentation on the original image in the publishable document three times, according to a first grid size, a second grid size and a third grid size respectively, to generate a first segmentation result, a second segmentation result and a third segmentation result; all sub-images in each of the first, second and third segmentation results can be spliced back into the original image; the first grid size is greater than twice the second grid size; the second grid size is greater than twice the third grid size;
a first determining subunit, configured to identify the sub-images in the first, second and third segmentation results respectively against a preset sensitive image, to determine whether any of the three segmentation results contains a target sub-image whose similarity to the preset sensitive image exceeds a preset value; the preset sensitive image is any one or more of the following: a sensitive image of the publishing area, and a sensitive image determined according to the personnel types of personnel in the publishing area;
a statistics subunit, configured to count the proportion of target sub-images in each of the first, second and third segmentation results;
a second determining subunit, configured to determine that the document to be published can be generated directly based on the original image if the proportion of target sub-images in each of the first, second and third segmentation results is smaller than a preset value;
and a third determining subunit, configured to perform foreground extraction on the original image if the proportion of target sub-images in at least one of the first, second and third segmentation results is greater than the preset value, perform sensitivity identification on the target foreground image obtained by the foreground extraction using the image editing model corresponding to the publishing area, and, if the sensitivity of the target foreground image is too high, determine the target foreground image to be a replaceable image object.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1-5.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-5.
CN202310239106.6A 2023-03-14 2023-03-14 Information publishing method, device, equipment and medium Pending CN115963954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310239106.6A CN115963954A (en) 2023-03-14 2023-03-14 Information publishing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115963954A true CN115963954A (en) 2023-04-14

Family

ID=87361756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310239106.6A Pending CN115963954A (en) 2023-03-14 2023-03-14 Information publishing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115963954A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183761A (en) * 2015-07-27 2015-12-23 网易传媒科技(北京)有限公司 Sensitive word replacement method and apparatus
CN110297927A (en) * 2019-05-17 2019-10-01 百度在线网络技术(北京)有限公司 Article dissemination method, device, equipment and storage medium
CN114943005A (en) * 2022-05-18 2022-08-26 中国建设银行股份有限公司 Picture display processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20230414