CN115034177A - Presentation file conversion method, device, equipment and storage medium


Info

Publication number
CN115034177A
Authority
CN
China
Prior art keywords
information
text
presentation
page
template
Prior art date
Legal status
Pending
Application number
CN202210687460.0A
Other languages
Chinese (zh)
Inventor
满园园
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202210687460.0A
Publication of CN115034177A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/109 Font handling; Temporal or kinetic typography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/19 Recognition using electronic means
    • G06V 30/19007 Matching; Proximity measures
    • G06V 30/19013 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/42 Document-oriented image-based pattern recognition based on the type of document

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides a presentation conversion method, device, equipment and storage medium, belonging to the technical field of artificial intelligence. The method comprises the following steps: acquiring a presentation to be converted, a target document style tag and a plurality of first template pages; performing style recognition processing on each first template page and determining the second template pages that conform to the target document style tag; performing image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and performing image recognition processing on each second template page to obtain second object information of each second template page; matching the first object information with the second object information, and determining a target page from the second template pages; and generating a target presentation according to the first object information and the target page. The embodiment of the application can improve the conversion efficiency of the presentation and reduce the time cost.

Description

Presentation file conversion method, device, equipment and storage medium
Technical Field
The present application relates to, but is not limited to, the field of artificial intelligence technology, and in particular to a method, an apparatus, a device, and a storage medium for converting a presentation.
Background
With the widespread popularization of office software, presentations are widely used in many aspects of social life, for example in the fields of work reports, enterprise promotion, product recommendation, wedding celebrations, project bidding, management consulting, and education and training. As the application fields of presentations keep expanding, the demand for producing slides keeps growing.
At present, when a user reuses the same presentation on different occasions, the style of the presentation first has to be converted so that it meets the requirements of the current occasion; however, every content page of the presentation has to be converted, which takes a large amount of time, so the conversion efficiency of the presentation is low and the time cost is high.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides a presentation conversion method, a presentation conversion device, presentation conversion equipment and a storage medium, which can improve the conversion efficiency of a presentation and reduce the time cost.
In order to achieve the above object, a first aspect of an embodiment of the present application provides a presentation transformation method, where the method includes: acquiring a presentation to be converted, a target document style label and a plurality of first template pages; performing style recognition processing on each first template page to obtain a first manuscript style tag of each first template page, comparing the first manuscript style tag with the target manuscript style tag, and determining a second template page from a plurality of first template pages according to a comparison result between the first manuscript style tag and the target manuscript style tag; performing image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and performing image recognition processing on each second template page to obtain second object information of each second template page; matching the first object information and the second object information, and determining a target page from each second template page according to a matching result between the first object information and the second object information; and generating a target presentation according to the first object information and the target page.
In some embodiments, the first object information includes a number of first text attribute information, a number of first image attribute information, and first layout type information, and the second object information includes a number of second text attribute information, a number of second image attribute information, and second layout type information; the image recognition processing of the presentation to be converted to obtain first object information of the presentation to be converted, and the image recognition processing of each second template page to obtain second object information of each second template page includes: performing optical character recognition on the presentation to be converted to obtain the first text attribute information, and performing optical character recognition on each second template page to obtain the second text attribute information; performing image recognition on the presentation to be converted to obtain the first image attribute information, and performing image recognition on each second template page to obtain the second image attribute information; and determining the first typesetting type information according to the first text attribute information and the first image attribute information, and determining the second typesetting type information according to the second text attribute information and the second image attribute information.
In some embodiments, the first text attribute information comprises first text position information, first text content information, a first font size value, and a first font value, the first image attribute information comprises first picture position information, the second text attribute information comprises second text position information, second text content information, a second font size value, and a second font value, and the second image attribute information comprises second picture position information; the determining the first layout type information according to the first text attribute information and the first image attribute information, and determining the second layout type information according to the second text attribute information and the second image attribute information includes: performing semantic recognition processing on the first text content information to obtain first semantic information; determining the text type of each first text attribute information according to the first text position information, the first semantic information, the first word size value and the first word numerical value; determining the first layout type information according to the text type of the first text attribute information, the first text position information and the first picture position information; performing semantic recognition processing on the second text content information to obtain second semantic information; determining the text type of each second text attribute information according to the second text position information, the second semantic information, the second word size value and the second word numerical value; and determining the second typesetting type information according to the text type of the second text attribute information, the second text position information and the second picture position information.
In some embodiments, the text type includes at least one of: a main title type, a subtitle type, and a body text type; the first layout type information includes at least one of: image-text type information, contrast type information, clause type information, and other type information; the second layout type information includes image-text type information, contrast type information, and clause type information; for the presentation to be converted, the first font size values corresponding to the main title type, the subtitle type, and the body text type decrease in that order; the image-text type information means that the first image attribute information includes one piece of first picture position information, the contrast type information means that the first image attribute information includes two pieces of first picture position information, the clause type information means that the presentation to be converted includes three or more pieces of first text attribute information belonging to the subtitle type, and the other type information means type information different from the image-text type information, the contrast type information, and the clause type information.
In some embodiments, the matching the first object information and the second object information, and determining a target page from each of the second template pages according to a matching result between the first object information and the second object information includes: comparing the first layout type information with the second layout type information, and determining similar template pages from the second template pages according to the comparison result of the first layout type information and the second layout type information; and matching the first object information and the second object information, and determining a target page from all the similar template pages according to a matching result between the first object information and the second object information.
In some embodiments, the matching the first object information and the second object information, and determining a target page from each of the homogeneous template pages according to a matching result between the first object information and the second object information includes: merging the presentation to be converted and each similar template page to obtain a merged page corresponding to each similar template page; for each merged page, determining a first matching value according to the first text position information and the second text position information, and determining a second matching value according to the first picture position information and the second picture position information; determining a matching result of each similar template page and the presentation to be converted according to the first matching value and the second matching value; and determining a target page from each similar template page according to the matching result of each similar template page and the presentation to be converted.
In some embodiments, before the step of comparing the first layout type information with the second layout type information and determining a similar template page from each second template page according to the comparison result between the first layout type information and the second layout type information, the method further includes: and when the first layout type information is the other types of information, changing the first layout type information into the image-text type information.
To achieve the above object, a second aspect of an embodiment of the present application proposes a presentation conversion apparatus, including: the acquisition unit is used for acquiring the presentation to be converted, the target document style labels and a plurality of first template pages; the analysis unit is used for performing style identification processing on each first template page to obtain a first manuscript style tag of each first template page, comparing the first manuscript style tag with the target manuscript style tag, and determining a second template page from a plurality of first template pages according to a comparison result between the first manuscript style tag and the target manuscript style tag; the identification unit is used for carrying out image identification processing on the presentation to be converted to obtain first object information of the presentation to be converted, and carrying out image identification processing on each second template page to obtain second object information of each second template page; a matching unit, configured to perform matching processing on the first object information and the second object information, and determine a target page from each second template page according to a matching result between the first object information and the second object information; and the generating unit is used for generating a target presentation according to the first object information and the target page.
In order to achieve the above object, a third aspect of the embodiments of the present application provides an electronic device, which includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the presentation conversion method according to the first aspect.
In order to achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium for computer-readable storage, and stores one or more programs, which are executable by one or more processors to implement the presentation conversion method according to the first aspect.
The embodiment of the application provides a presentation conversion method, device, equipment and storage medium, and the method comprises the following steps: acquiring a presentation to be converted, a target document style tag and a plurality of first template pages; performing style recognition processing on each first template page to obtain a first document style tag of each first template page, comparing the first document style tag with the target document style tag, and determining second template pages from the plurality of first template pages according to the comparison result between the first document style tag and the target document style tag; performing image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and performing image recognition processing on each second template page to obtain second object information of each second template page; matching the first object information with the second object information, and determining a target page from the second template pages according to the matching result between the first object information and the second object information; and generating a target presentation according to the first object information and the target page. According to the scheme provided by the embodiment of the application, style recognition processing is used to obtain the first document style tag of each first template page, the first document style tag is compared with the target document style tag, and the second template pages meeting the document style requirement are determined according to the comparison result; image recognition processing is then used to obtain the first object information of the presentation to be converted and the second object information of each second template page; the first object information and the second object information are then matched, the target page suitable for the presentation to be converted is determined according to the matching result, and the target presentation is generated, so that the presentation to be converted is converted into the target presentation, the conversion efficiency of the presentation is improved, and the time cost is reduced.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
Fig. 1 is a flowchart of a presentation conversion method according to an embodiment of the present application;
FIG. 2 is a flowchart of determining layout type information according to another embodiment of the present application;
FIG. 3 is a flowchart of another method for determining layout type information according to another embodiment of the present application;
FIG. 4 is a flow chart of a method for determining a destination page according to another embodiment of the present application;
FIG. 5 is a flow chart of another embodiment of the present application for determining a destination page;
FIG. 6 is a flowchart illustrating a method for modifying layout type information according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of a presentation conversion apparatus according to another embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of an electronic device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the description of the present application, "several" means one or more and "a plurality of" means two or more; terms such as "greater than", "less than" and "exceeding" are understood as excluding the stated number, while terms such as "above", "below" and "within" are understood as including the stated number.
It should be noted that although a functional module division is shown in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a module division different from that in the devices, or in an order different from that in the flowcharts. The terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
First, several terms referred to in the present application are resolved:
artificial Intelligence (AI): is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence; artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produces a new intelligent machine that can react in a manner similar to human intelligence, and research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems, among others. The artificial intelligence can simulate the information process of human consciousness and thinking. Artificial intelligence is also a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Optical Character Recognition (OCR) refers to a process in which an electronic device (e.g., a scanner or a digital camera) examines characters printed on paper, determines their shapes by detecting dark and light patterns, and then translates the shapes into computer text using character recognition methods.
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence; NLP studies enable various theories and methods for efficient communication between humans and computers using natural language.
At present, when a user reuses the same presentation on different occasions, the style of the presentation first has to be converted so that it meets the requirements of the current occasion; however, every content page of the presentation has to be converted, which takes a large amount of time, so the conversion efficiency of the presentation is low and the time cost is high.
Aiming at the problems of low conversion efficiency of the presentation and increased time cost, the application provides a presentation conversion method, a device, equipment and a storage medium, wherein the method comprises the following steps: acquiring a presentation to be converted, a target document style label and a plurality of first template pages; performing style recognition processing on each first template page to obtain a first manuscript style tag of each first template page, comparing the first manuscript style tag with a target manuscript style tag, and determining a second template page from the plurality of first template pages according to a comparison result between the first manuscript style tag and the target manuscript style tag; carrying out image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and carrying out image recognition processing on each second template page to obtain second object information of each second template page; matching the first object information and the second object information, and determining a target page from each second template page according to a matching result between the first object information and the second object information; and generating a target presentation according to the first object information and the target page. According to the scheme provided by the embodiment of the application, the style identification processing is utilized to identify and obtain the first manuscript style label of each first template page, comparing the first manuscript style label with the target manuscript style label, determining a second template page meeting the manuscript style requirement according to the comparison result, then, by utilizing the image recognition processing, the first object information of the presentation to be converted is recognized and obtained, and the second object information of each second template page is recognized and obtained, then, the first object information and the second object information are matched, and a target page suitable for the presentation to be converted is determined according to a matching result, and then the target presentation is generated, so that the presentation to be converted is converted into the target presentation, the conversion efficiency of the presentation can be improved, and the time cost is reduced.
The presentation conversion method, the presentation conversion device, the presentation conversion apparatus, and the storage medium provided in the embodiments of the present application are specifically described in the following embodiments, and first, the presentation conversion method in the embodiments of the present application is described.
The embodiment of the application provides a presentation conversion method, and relates to the technical field of data processing. The presentation conversion method provided by the embodiment of the application can be applied to a terminal, a server side and software running in the terminal or the server side. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, or the like; the server side can be configured as an independent physical server, can also be configured as a server cluster or a distributed system formed by a plurality of physical servers, and can also be configured as a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN (content distribution network) and big data and artificial intelligence platforms; the software may be an application or the like that implements the presentation conversion method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In each embodiment of the present application, when data related to the user identity or characteristic, such as user information, user behavior data, user history data, and user location information, is processed, permission or consent of the user is obtained, and the data collection, use, and processing comply with relevant laws and regulations and standards of relevant countries and regions. In addition, when the embodiment of the present application needs to acquire sensitive personal information of a user, individual permission or individual consent of the user is obtained through a pop-up window or a jump to a confirmation page, and after the individual permission or individual consent of the user is definitely obtained, necessary user-related data for enabling the embodiment of the present application to operate normally is acquired.
The embodiments of the present application will be further explained with reference to the drawings.
As shown in fig. 1, fig. 1 is a flowchart of a presentation conversion method according to an embodiment of the present application. The presentation transformation method includes, but is not limited to, the following steps:
step S110, acquiring a presentation to be converted, a target document style label and a plurality of first template pages;
step S120, performing style recognition processing on each first template page to obtain a first manuscript style tag of each first template page, comparing the first manuscript style tag with a target manuscript style tag, and determining a second template page from the plurality of first template pages according to a comparison result between the first manuscript style tag and the target manuscript style tag;
step S130, carrying out image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and carrying out image recognition processing on each second template page to obtain second object information of each second template page;
step S140, matching the first object information and the second object information, and determining a target page from each second template page according to a matching result between the first object information and the second object information;
and step S150, generating a target presentation according to the first object information and the target page.
It can be understood that the presentation to be converted and a target document style tag input by a user are obtained, where the target document style tag is one of a plurality of preset document style tags and a document style tag is used to represent the document style of a presentation, and a plurality of first template pages stored in a template library are obtained. The recognition objects of the style recognition processing include, but are not limited to, the size of a presentation and the text features and picture features in the presentation; through style recognition processing, the document style of each first template page can be effectively determined, and the first document style tag of each first template page is obtained by combining the preset document style tags. The first document style tags identical to the target document style tag are then screened out, and the first template pages matched with the target document style tag are used as second template pages, so that every second template page meets the document style requirement. Then, image recognition processing is performed on the presentation to be converted to obtain first object information, and image recognition processing is performed on each second template page to obtain second object information; the first object information and the second object information each refer to the sum of the feature information of the objects in a presentation, which can be used as the feature information of that presentation, and the objects include, but are not limited to, text and pictures. The second object information with the highest matching degree with the first object information is then determined, the second template page corresponding to that second object information is used as the target page, and the target presentation is generated from the first object information and the target page. Because the target page is the second template page closest to the feature information of the presentation to be converted, the content of the presentation to be converted is prevented from being changed greatly during conversion, the readability of the target presentation is increased, and the conversion quality of the presentation is improved. Based on this, style recognition processing is used to obtain the first document style tag of each first template page, the first document style tag is compared with the target document style tag, and the second template pages meeting the document style requirement are determined according to the comparison result; image recognition processing is then used to obtain the first object information of the presentation to be converted and the second object information of each second template page; the first object information and the second object information are then matched, the target page suitable for the presentation to be converted is determined according to the matching result, and the target presentation is generated, which improves the conversion efficiency of the presentation and reduces the time cost.
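As a reading aid only, the control flow of steps S110 to S150 can be pictured as a small driver function. The sketch below (Python) is not the patented implementation; the injected callables recognise_style, recognise_objects, match_score and render_target are hypothetical stand-ins for whatever style recognition, image recognition, matching and generation components an implementation actually uses.

    from typing import Callable, Iterable

    def convert_presentation(source_deck,
                             target_style_tag: str,
                             first_template_pages: Iterable,
                             recognise_style: Callable,
                             recognise_objects: Callable,
                             match_score: Callable,
                             render_target: Callable):
        # S120: style recognition, keep only pages whose tag equals the target tag
        second_pages = [page for page in first_template_pages
                        if recognise_style(page) == target_style_tag]

        # S130: image recognition on the source deck and on every candidate page
        first_info = recognise_objects(source_deck)
        second_infos = [recognise_objects(page) for page in second_pages]

        # S140: the candidate whose object information matches best becomes the target page
        best_index = max(range(len(second_pages)),
                         key=lambda i: match_score(first_info, second_infos[i]))
        target_page = second_pages[best_index]

        # S150: pour the recognised content into the chosen template page
        return render_target(first_info, target_page)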
It should be noted that the target presentation is generated from the first object information and the target page, and the specific steps include, but are not limited to: removing the text content of the text boxes in the target page and removing the pictures in the target page; adding the text content corresponding to the first object information into the corresponding text boxes in the target page and setting its text style to the text style of the initial state, where the text style includes, but is not limited to, the font, the font size, the text color, and text highlighting; and then inserting the picture corresponding to the first object information at the position of the picture in the initial state of the target page and adjusting its size according to the size of the picture in the initial state of the target page.
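Purely as an illustration of these generation steps, the following sketch uses the python-pptx library under several assumptions: the target page is available as a one-slide template file, a single source picture replaces every template picture, and the file paths and slide index are made up.

    from pptx import Presentation
    from pptx.enum.shapes import MSO_SHAPE_TYPE

    def fill_template_slide(template_path: str, new_texts: list,
                            picture_path: str, out_path: str):
        deck = Presentation(template_path)
        slide = deck.slides[0]          # assumed: the target page is the first slide

        # write the recognised text into each text box (this simple assignment
        # keeps paragraph-level formatting but not run-level formatting)
        text_boxes = [s for s in slide.shapes if s.has_text_frame]
        for shape, text in zip(text_boxes, new_texts):
            shape.text_frame.text = text

        # swap every template picture for the source picture at the same position and size
        for shape in list(slide.shapes):
            if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
                left, top, width, height = shape.left, shape.top, shape.width, shape.height
                shape._element.getparent().remove(shape._element)  # private API: drop old picture
                slide.shapes.add_picture(picture_path, left, top, width, height)

        deck.save(out_path)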
Style recognition processing is performed with a style recognition model: a presentation is input into the trained style recognition model and preprocessed by a preprocessing layer of the model, so that the size of the presentation and the text features and picture features in the presentation are determined. The style recognition model then analyzes the size of the presentation; for example, the display style of the presentation is determined according to its aspect ratio, where a presentation with an aspect ratio larger than one is suitable for a computer and a presentation with an aspect ratio smaller than one is suitable for a mobile phone. The style recognition model then analyzes the text features in the presentation, where the text features include, but are not limited to, text position, text word count, and text font; for example, the text paragraph with the most words is determined, the text font of that paragraph is determined, and the font style of the presentation is determined from that font. The style recognition model then analyzes the picture features in the presentation and determines the picture style of the presentation, where the picture style includes, but is not limited to, technological, ink-wash, business, and cartoon styles, and the picture features include, but are not limited to, picture position and picture size. The layout style of the presentation is determined according to the text position, the text word count, the picture position, and the picture size, where the layout style includes the image-text layout style, the contrast layout style, and the clause layout style. Finally, the document style tag of the presentation is determined according to the display style, the font style, the picture style, and the layout style of the presentation.
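Two of the rules described above (display style from the aspect ratio, font style from the longest paragraph) can be written down directly. The sketch below assumes each text paragraph has already been reduced to a dict with hypothetical "word_count" and "font" keys; it is a rule-of-thumb illustration, not the trained style recognition model.

    def display_style(slide_width: float, slide_height: float) -> str:
        # aspect ratio > 1: landscape deck for a computer; < 1: portrait deck for a phone
        return "computer" if slide_width / slide_height > 1 else "mobile phone"

    def font_style(text_blocks: list) -> str:
        # take the font of the paragraph with the largest word count, as described above
        longest = max(text_blocks, key=lambda block: block["word_count"])
        return longest["font"]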
The style recognition model is obtained by training a preset classification model using the document style labels and a plurality of demonstration training documents as training data.
The presentation may be a PPT, a poster, or another design document in which text and pictures are combined, and is not limited herein.
In specific practice, by the presentation conversion method, the document style of the presentation to be converted can be converted according to template pages with different document styles so as to meet the requirements of users and display contents with different styles; the presentation conversion method can be applied to the fields of education, training, design and the like.
In addition, referring to fig. 2, in an embodiment, the first object information includes a number of first text attribute information, a number of first image attribute information, and first layout type information, and the second object information includes a number of second text attribute information, a number of second image attribute information, and second layout type information; step S130 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S210, carrying out optical character recognition on the presentation to be converted to obtain first text attribute information, and carrying out optical character recognition on each second template page to obtain second text attribute information;
step S220, carrying out image recognition on the presentation to be converted to obtain first image attribute information, and carrying out image recognition on each second template page to obtain second image attribute information;
step S230, determining first layout type information according to the first text attribute information and the first image attribute information, and determining second layout type information according to the second text attribute information and the second image attribute information.
It can be understood that the first text attribute information and the second text attribute information can be determined by optical character recognition, and the first image attribute information and the second image attribute information can be determined by image recognition. The first text attribute information and the second text attribute information each include, but are not limited to, text position information and text word count information; the first image attribute information and the second image attribute information each include, but are not limited to, picture position information and picture size information. The layout of the presentation to be converted can be determined from the first text attribute information and the first image attribute information, and the layout of each second template page can be determined from the second text attribute information and the second image attribute information; the first layout type information is used to represent the layout type of the presentation to be converted, and the second layout type information is used to represent the layout type of the second template page.
It should be noted that performing optical character recognition with an optical character recognition model and performing image recognition with an image recognition model are technologies well known to those skilled in the art, and are not described herein again.
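For concreteness only, the following sketch shows one possible shape for the text attribute records of step S210, using pytesseract as a stand-in OCR backend for text positions; the font size and font fields are left to whatever recognition model an implementation actually uses, and the record layout itself is an assumption.

    from dataclasses import dataclass
    from typing import List, Optional

    import pytesseract
    from PIL import Image

    @dataclass
    class TextAttribute:
        position: tuple                     # (left, top, width, height) of the recognised text
        content: str
        font_size: Optional[float] = None   # first/second font size value, if available
        font: Optional[str] = None          # first/second font value, if available

    def ocr_text_attributes(page_image_path: str) -> List[TextAttribute]:
        data = pytesseract.image_to_data(Image.open(page_image_path),
                                         output_type=pytesseract.Output.DICT)
        attributes = []
        for i, word in enumerate(data["text"]):
            if word.strip():
                box = (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
                attributes.append(TextAttribute(position=box, content=word))
        return attributes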
In addition, referring to fig. 3, in an embodiment, the first text attribute information includes first text position information, first text content information, a first font size value, and a first font value, the first image attribute information includes first picture position information, the second text attribute information includes second text position information, second text content information, a second font size value, and a second font value, and the second image attribute information includes second picture position information; step S230 in the embodiment shown in fig. 2 includes, but is not limited to, the following steps:
step S310, performing semantic recognition processing on the first text content information to obtain first semantic information;
step S320, determining the text type of each piece of first text attribute information according to the first text position information, the first semantic information, the first font size value and the first font value;
step S330, determining first layout type information according to the text type of the first text attribute information, the first text position information and the first picture position information;
step S340, carrying out semantic recognition processing on the second text content information to obtain second semantic information;
step S350, determining the text type of each piece of second text attribute information according to the second text position information, the second semantic information, the second font size value and the second font value;
step S360, determining second typesetting type information according to the text type of the second text attribute information, the second text position information and the second picture position information.
It can be understood that the first semantic information and the second semantic information are obtained through semantic recognition processing, where the first semantic information represents the semantic content of the first text content information and the second semantic information represents the semantic content of the second text content information. The text type of each piece of first text attribute information can be analyzed from the first text position information, the first semantic information, the first font size value, and the first font value, and the first layout type information is then determined in combination with the first picture position information. Likewise, the text type of each piece of second text attribute information can be analyzed from the second text position information, the second semantic information, the second font size value, and the second font value, and the second layout type information is then determined in combination with the second picture position information. In this way, the layout types of the presentation to be converted and of each second template page can be accurately analyzed.
It should be noted that the semantic recognition processing belongs to natural language processing, and a method for performing semantic recognition through a trained semantic recognition model belongs to technologies well known to those skilled in the art, and is not described herein again.
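A heavily simplified sketch of the text-type decision in steps S320 and S350: it uses only the font-size ranking spelled out in the embodiment that follows (main title largest, body text smallest) and ignores the text position and the semantic information that a real implementation would also weigh.

    def classify_text_types(font_sizes: list) -> list:
        # rank the distinct font sizes on the page: largest -> main title,
        # second largest -> subtitle, everything smaller -> body text
        distinct = sorted(set(font_sizes), reverse=True)
        label_for = {}
        for rank, size in enumerate(distinct):
            if rank == 0:
                label_for[size] = "main title"
            elif rank == 1:
                label_for[size] = "subtitle"
            else:
                label_for[size] = "body text"
        return [label_for[size] for size in font_sizes]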
Additionally, in one embodiment, the text type includes at least one of: a main title type, a subtitle type, and a body text type;
the first layout type information includes at least one of: graphics-text type information, contrast type information, clause type information, and other types of information;
the second layout type information comprises image-text type information, contrast type information and clause type information;
for the presentation to be converted, the first font size values corresponding to the main title type, the subtitle type, and the body text type decrease in that order; the image-text type information means that the first image attribute information includes one piece of first picture position information, the contrast type information means that the first image attribute information includes two pieces of first picture position information, the clause type information means that the presentation to be converted includes three or more pieces of first text attribute information belonging to the subtitle type, and the other type information means type information different from the image-text type information, the contrast type information, and the clause type information.
It can be understood that, within the same page of a presentation, the font size values of the main title, the subtitles, and the body text decrease in that order, while their word counts generally increase in that order; the text type can therefore be analyzed from the font size value or the word count value. When a page of the presentation contains a main title, one body text, and one picture, the layout type of the page is the image-text type; when a page contains a main title, two parallel subtitles, two parallel body texts, and two parallel pictures, the layout type is the contrast type; and when a page contains a main title, a plurality of subtitles, and body text corresponding to each subtitle, the layout type is the clause type.
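Read literally, these layout rules reduce to a few counts. The function below is that literal reading, with illustrative string labels rather than any labels used by the source.

    def classify_layout(picture_count: int, subtitle_count: int) -> str:
        # one picture -> image-text layout, two pictures -> contrast layout,
        # three or more subtitle-type text blocks -> clause layout, otherwise "other"
        if picture_count == 1:
            return "image-text"
        if picture_count == 2:
            return "contrast"
        if subtitle_count >= 3:
            return "clause"
        return "other"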
Referring to fig. 4, in an embodiment, step S140 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S410, comparing the first layout type information with the second layout type information, and determining similar template pages from all the second template pages according to the comparison result of the first layout type information and the second layout type information;
and step S420, performing matching processing on the first object information and the second object information, and determining a target page from all similar template pages according to a matching result between the first object information and the second object information.
It can be understood that the second template pages whose layout type is the same as that of the presentation to be converted are taken as the similar template pages, the first object information is matched against the second object information of these similar template pages, and the similar template page closest to the feature information of the presentation to be converted is taken as the target page. The similar template pages are screened out first; because the number of similar template pages is smaller than the number of second template pages, the number of matching operations in the matching stage is reduced. Since matching the first object information against the second object information takes longer than screening the similar template pages from the second template pages, reducing the number of matching operations effectively improves the matching efficiency and thus the conversion efficiency of the presentation.
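The two-stage selection of steps S410 and S420 can be sketched as a filter followed by a scored search. The dict keys and the match_score callable below are assumptions; any scoring function, such as the Euclidean-distance score sketched further below, can be plugged in.

    def pick_target_page(first_info: dict, second_pages: list, match_score) -> dict:
        # stage 1 (S410): keep only template pages with the same layout type as the source deck
        similar_pages = [page for page in second_pages
                         if page["layout_type"] == first_info["layout_type"]]
        # stage 2 (S420): run the more expensive object matching only on this smaller set
        return max(similar_pages, key=lambda page: match_score(first_info, page))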
As shown in fig. 5, in an embodiment, step S420 in the embodiment shown in fig. 4 includes, but is not limited to, the following steps:
step S510, merging the presentation to be converted and each similar template page to obtain a merged page corresponding to each similar template page;
step S520, aiming at each merged page, determining a first matching value according to the first text position information and the second text position information, and determining a second matching value according to the first picture position information and the second picture position information;
step S530, determining the matching result of each similar template page and the presentation to be converted according to the first matching value and the second matching value;
and step S540, determining a target page from each similar template page according to the matching result of each similar template page and the presentation to be converted.
It can be understood that merging the presentation to be converted with each similar template page means copying the entire content of the presentation to be converted onto each similar template page to obtain a corresponding merged page. In each merged page, the first text position information and the second text position information are analyzed to calculate a first matching value representing the degree of text matching, and the first picture position information and the second picture position information are analyzed to calculate a second matching value representing the degree of picture matching. The first matching value and the second matching value are then added to obtain a total matching value, which is the matching result corresponding to that similar template page, and the similar template page with the highest total matching value is selected as the target page. This effectively screens out the similar template page closest to the feature information of the presentation to be converted, prevents the content of the presentation to be converted from being changed greatly during conversion, increases the readability of the target presentation, and improves the conversion quality of the presentation.
In specific practice, a first text coordinate of a text box center where a text is located in a page of a presentation is determined through first text position information, a second text coordinate of the text box center where the text is located in the page of the presentation is determined through second text position information, then a first Euclidean distance between the first text coordinate and the second text coordinate is calculated, and a first matching value is further determined, wherein the first matching value and the first Euclidean distance are in a negative correlation relationship, and the smaller the value of the first Euclidean distance is, the larger the first matching value is represented; and determining a first picture coordinate of the picture center in a page of the presentation according to the first picture position information, determining a second picture coordinate of the picture center in the page of the presentation according to the second picture position information, calculating a second Euclidean distance between the first picture coordinate and the second picture coordinate, and further determining a second matching value, wherein the second matching value and the second Euclidean distance form a negative correlation relationship, and the smaller the value of the second Euclidean distance, the larger the second matching value is represented.
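The text fixes only the qualitative relationship (the smaller the Euclidean distance, the larger the matching value). The sketch below picks one concrete decreasing function, 1/(1 + d), to make that relationship runnable; that choice is an assumption, not the patented formula.

    import math

    def euclidean(p, q) -> float:
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def total_matching_value(first_text_centre, second_text_centre,
                             first_picture_centre, second_picture_centre) -> float:
        # each matching value falls as the corresponding Euclidean distance grows
        first_matching_value = 1.0 / (1.0 + euclidean(first_text_centre, second_text_centre))
        second_matching_value = 1.0 / (1.0 + euclidean(first_picture_centre, second_picture_centre))
        return first_matching_value + second_matching_value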
It should be noted that the method for calculating the euclidean distance between two coordinates belongs to the techniques well known to those skilled in the art, and will not be described herein.
As shown in fig. 6, in an embodiment, before step S410 in the embodiment shown in fig. 4, the following steps are further included, but not limited to:
and step S610, when the first layout type information is other types of information, changing the first layout type information into the image-text type information.
It can be understood that when the first layout type information is the other type information, no second layout type information conforms to it; changing the first layout type information to the general image-text type information therefore ensures that the presentation to be converted can still be converted effectively.
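Step S610 amounts to a one-line fallback; the string labels below are the same illustrative ones used in the earlier sketches.

    def normalise_layout_type(layout_type: str) -> str:
        # step S610: an "other"-type source page is treated as the generic image-text layout
        return "image-text" if layout_type == "other" else layout_type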
In addition, referring to fig. 7, the present application also provides a presentation conversion apparatus 700 including:
an obtaining unit 710, configured to obtain a presentation to be converted, a target document style tag, and multiple first template pages;
the analysis unit 720 is configured to perform style identification processing on each first template page to obtain a first document style tag of each first template page, perform comparison processing on the first document style tag and a target document style tag, and determine a second template page from the plurality of first template pages according to a comparison result between the first document style tag and the target document style tag;
the recognition unit 730 is configured to perform image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and perform image recognition processing on each second template page to obtain second object information of each second template page;
a matching unit 740, configured to perform matching processing on the first object information and the second object information, and determine a target page from each second template page according to a matching result between the first object information and the second object information;
a generating unit 750 for generating a target presentation according to the first object information and the target page.
It can be understood that the specific implementation of the presentation transformation apparatus 700 is substantially the same as the specific implementation of the presentation transformation method, and is not described herein again; based on the method, the style identification processing is utilized to identify and obtain the first manuscript style label of each first template page, the first manuscript style label and the target manuscript style label are compared, the second template page meeting the manuscript style requirement is determined according to the comparison result, then the image identification processing is utilized to identify and obtain the first object information of the presentation to be converted and identify and obtain the second object information of each second template page, then the first object information and the second object information are matched, the target page suitable for the presentation to be converted is determined according to the matching result, and then the target presentation is generated.
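Purely as a structural illustration of the apparatus 700 in Fig. 7, the sketch below models the five units as injected callables; the class name, method names and data shapes are all assumptions.

    class PresentationConversionApparatus:
        """Structural sketch of Fig. 7: each unit is an injected callable, so the class
        only mirrors the acquiring / analysing / recognising / matching / generating split."""

        def __init__(self, acquire, analyse, recognise, match, generate):
            self.acquire = acquire
            self.analyse = analyse
            self.recognise = recognise
            self.match = match
            self.generate = generate

        def run(self, request):
            deck, target_style_tag, first_template_pages = self.acquire(request)
            second_pages = self.analyse(first_template_pages, target_style_tag)
            first_info, second_infos = self.recognise(deck, second_pages)
            target_page = self.match(first_info, second_infos, second_pages)
            return self.generate(first_info, target_page)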
In addition, referring to fig. 8, fig. 8 illustrates a hardware structure of an electronic device of another embodiment, the electronic device including:
the processor 801 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiment of the present Application;
the Memory 802 may be implemented in a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 802 may store an operating system and other application programs, and when the technical solution provided by the embodiment of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 802, and the processor 801 is used to call and execute the presentation transformation method according to the embodiment of the present application, for example, execute the above-described method steps S110 to S150 in fig. 1, method steps S210 to S230 in fig. 2, method steps S310 to S360 in fig. 3, method steps S410 to S420 in fig. 4, method steps S510 to S540 in fig. 5, and method step S610 in fig. 6;
an input/output interface 803 for realizing information input and output;
the communication interface 804 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (such as USB, network cable, and the like) or in a wireless manner (such as mobile network, WIFI, bluetooth, and the like);
a bus 805 that transfers information between the various components of the device (e.g., the processor 801, memory 802, input/output interfaces 803, and communication interface 804);
wherein the processor 801, the memory 802, the input/output interface 803 and the communication interface 804 are communicatively connected to each other within the device via a bus 805.
An embodiment of the present application further provides a storage medium, which is a computer-readable storage medium for computer-readable storage. The storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the above-mentioned presentation conversion method, for example, to perform the above-described method steps S110 to S150 in fig. 1, method steps S210 to S230 in fig. 2, method steps S310 to S360 in fig. 3, method steps S410 to S420 in fig. 4, method steps S510 to S540 in fig. 5, and method step S610 in fig. 6.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The presentation conversion method, device, equipment, and storage medium provided by the embodiments of the present application acquire a presentation to be converted, a target document style tag, and a plurality of first template pages; perform style recognition processing on each first template page to obtain a first document style tag of each first template page, compare the first document style tag with the target document style tag, and determine second template pages from the plurality of first template pages according to the comparison result between the first document style tag and the target document style tag; perform image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and perform image recognition processing on each second template page to obtain second object information of each second template page; match the first object information with the second object information, and determine a target page from the second template pages according to the matching result between the first object information and the second object information; and generate a target presentation according to the first object information and the target page. On this basis, the second template pages that meet the document style requirement are first selected by style recognition and tag comparison; image recognition then supplies the first object information of the presentation to be converted and the second object information of each second template page; the two sets of object information are matched to determine the target page suited to the presentation to be converted; and the target presentation is finally generated.
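As one possible concrete form of the style recognition and tag comparison step (roughly what the analysis unit might do internally, with the style classifier supplied separately), the helper below keeps only the template pages whose recognized tag equals the target tag. The classifier interface and the exact-match comparison are assumptions; the embodiment does not fix either.

```python
from typing import Callable, Iterable, List

def select_second_template_pages(first_template_pages: Iterable,
                                 target_document_style_tag: str,
                                 classify_style: Callable[[object], str]) -> List:
    """classify_style stands in for any style-recognition model that maps a
    template page (e.g. a rendered image) to a document style tag string."""
    second_template_pages = []
    for page in first_template_pages:
        first_document_style_tag = classify_style(page)
        # comparison result: keep the page only when the tags agree
        if first_document_style_tag == target_document_style_tag:
            second_template_pages.append(page)
    return second_template_pages
```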
The embodiments described above are intended to illustrate the technical solutions of the embodiments of the present application more clearly and do not constitute limitations on those technical solutions. It is obvious to those skilled in the art that, with the evolution of technologies and the emergence of new application scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1 to 6 do not limit the embodiments of the present application, and may include more or fewer steps than those shown, combine some steps, or use different steps.
The above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in this application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application in essence, or the part thereof that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing programs, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereto. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A presentation conversion method, the method comprising:
acquiring a presentation to be converted, a target document style tag and a plurality of first template pages;
performing style recognition processing on each first template page to obtain a first document style tag of each first template page, comparing the first document style tag with the target document style tag, and determining a second template page from the plurality of first template pages according to a comparison result between the first document style tag and the target document style tag;
performing image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and performing image recognition processing on each second template page to obtain second object information of each second template page;
matching the first object information and the second object information, and determining a target page from each second template page according to a matching result between the first object information and the second object information;
and generating a target presentation according to the first object information and the target page.
2. The method according to claim 1, wherein the first object information includes several pieces of first text attribute information, several pieces of first image attribute information, and first layout type information, and the second object information includes several pieces of second text attribute information, several pieces of second image attribute information, and second layout type information;
the performing image recognition processing on the presentation to be converted to obtain the first object information of the presentation to be converted, and performing image recognition processing on each second template page to obtain the second object information of each second template page includes:
performing optical character recognition on the presentation to be converted to obtain the first text attribute information, and performing optical character recognition on each second template page to obtain the second text attribute information;
performing image recognition on the presentation to be converted to obtain the first image attribute information, and performing image recognition on each second template page to obtain the second image attribute information;
and determining the first layout type information according to the first text attribute information and the first image attribute information, and determining the second layout type information according to the second text attribute information and the second image attribute information.
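By way of illustration only, the optical character recognition and image recognition of claim 2 could be realized roughly as below, assuming each slide or template page is available as a rendered image and that pytesseract plus OpenCV are acceptable stand-ins for the unspecified recognition engines; approximating the font size from the text-box height is likewise an assumption, not part of the claim.

```python
import cv2
import pytesseract

def extract_text_attribute_info(page_image_bgr):
    """OCR a page image into text attribute records: position, content,
    and an approximate font size taken from the text-box height."""
    data = pytesseract.image_to_data(page_image_bgr,
                                     output_type=pytesseract.Output.DICT)
    records = []
    for i, text in enumerate(data["text"]):
        if text.strip():
            records.append({
                "position": (data["left"][i], data["top"][i],
                             data["width"][i], data["height"][i]),
                "content": text,
                "font_size": data["height"][i],  # rough proxy for the font size value
            })
    return records

def extract_picture_position_info(page_image_bgr, min_area=10_000):
    """Very rough picture detection via contour analysis: large boxed regions
    are treated as pictures; a production system would use a trained detector."""
    gray = cv2.cvtColor(page_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    return [b for b in boxes if b[2] * b[3] >= min_area]
```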
3. The method of claim 2, wherein the first text attribute information comprises first text position information, first text content information, a first font size value, and a first font value; the first image attribute information comprises first picture position information; the second text attribute information comprises second text position information, second text content information, a second font size value, and a second font value; and the second image attribute information comprises second picture position information;
the determining the first layout type information according to the first text attribute information and the first image attribute information, and determining the second layout type information according to the second text attribute information and the second image attribute information includes:
performing semantic recognition processing on the first text content information to obtain first semantic information;
determining the text type of each piece of first text attribute information according to the first text position information, the first semantic information, the first font size value and the first font value;
determining the first layout type information according to the text type of the first text attribute information, the first text position information and the first picture position information;
performing semantic recognition processing on the second text content information to obtain second semantic information;
determining the text type of each piece of second text attribute information according to the second text position information, the second semantic information, the second font size value and the second font value;
and determining the second layout type information according to the text type of the second text attribute information, the second text position information and the second picture position information.
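A hedged illustration of the text-type decision in claim 3 follows. The thresholds, the use of vertical position for the main title, and the omission of the semantic and font-value signals are all simplifying assumptions made for the sketch.

```python
def classify_text_type(record, page_height, body_font_size):
    """Label one text attribute record as 'main_title', 'subtitle' or 'body'.
    record: dict with 'position' (x, y, w, h) and 'font_size'.
    body_font_size: a baseline size, e.g. the median font size on the page."""
    y = record["position"][1]
    size = record["font_size"]
    if size >= 1.8 * body_font_size and y < 0.25 * page_height:
        return "main_title"   # largest font, near the top of the page
    if size >= 1.3 * body_font_size:
        return "subtitle"     # intermediate font size
    return "body"             # everything else treated as body text
```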
4. The method of claim 3, wherein the text type comprises at least one of: a main title type, a subtitle type, and a body text type;
the first layout type information includes at least one of: image-text type information, contrast type information, clause type information, and other types of information;
the second layout type information comprises image-text type information, contrast type information and clause type information;
for the presentation to be converted, the first font size values corresponding to the main title type, the subtitle type and the body text type decrease in that order; the image-text type information means that the first image attribute information includes one piece of first picture position information; the contrast type information means that the first image attribute information includes two pieces of first picture position information; the clause type information means that the presentation to be converted includes three or more pieces of first text attribute information belonging to the subtitle type; and the other types of information refers to type information different from the image-text type information, the contrast type information and the clause type information.
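Claim 4 defines the layout types by simple counts, which can be transcribed almost directly; the ordering of the checks (pictures first, then subtitles) is an assumption for pages that would satisfy more than one rule.

```python
def determine_layout_type(text_types, picture_positions):
    """text_types: text types of the page's text blocks ('main_title', 'subtitle', 'body').
    picture_positions: picture bounding boxes found on the page."""
    if len(picture_positions) == 1:
        return "image_text"   # one picture position  -> image-text type
    if len(picture_positions) == 2:
        return "contrast"     # two picture positions -> contrast type
    if sum(1 for t in text_types if t == "subtitle") >= 3:
        return "clause"       # three or more subtitle blocks -> clause type
    return "other"            # none of the above -> other type
```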
5. The method according to claim 4, wherein the matching the first object information and the second object information, and determining a target page from each second template page according to a matching result between the first object information and the second object information comprises:
comparing the first layout type information with the second layout type information, and determining similar template pages from the second template pages according to the comparison result of the first layout type information and the second layout type information;
and matching the first object information and the second object information, and determining a target page from all the similar template pages according to a matching result between the first object information and the second object information.
6. The method according to claim 5, wherein the performing matching processing on the first object information and the second object information, and determining a target page from each of the similar template pages according to a matching result between the first object information and the second object information comprises:
merging the presentation to be converted and each similar template page to obtain a merged page corresponding to each similar template page;
for each merged page, determining a first matching value according to the first text position information and the second text position information, and determining a second matching value according to the first picture position information and the second picture position information;
determining a matching result of each similar template page and the presentation to be converted according to the first matching value and the second matching value;
and determining a target page from each similar template page according to the matching result of each similar template page and the presentation to be converted.
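Claim 6 leaves the exact form of the two matching values open. One plausible reading, sketched below under that assumption, overlays the slide on a candidate template page and scores text boxes and picture boxes separately by intersection-over-union before combining them; the weighting and the IoU choice are not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def page_match_score(first_info, second_info, text_weight=0.5):
    """first_info / second_info: dicts with 'text_positions' and 'picture_positions'
    lists of (x, y, w, h) boxes for the slide and one similar template page."""
    def mean_best_iou(boxes_a, boxes_b):
        if not boxes_a or not boxes_b:
            return 0.0
        return sum(max(iou(a, b) for b in boxes_b) for a in boxes_a) / len(boxes_a)

    first_match_value = mean_best_iou(first_info["text_positions"],
                                      second_info["text_positions"])      # text positions
    second_match_value = mean_best_iou(first_info["picture_positions"],
                                       second_info["picture_positions"])  # picture positions
    return text_weight * first_match_value + (1 - text_weight) * second_match_value

# The target page would then be the similar template page with the best score, e.g.:
# target = max(similar_pages, key=lambda p: page_match_score(slide_info, page_info[p]))
```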
7. The method according to claim 5, wherein before the step of comparing the first layout type information with the second layout type information and determining the similar template pages from the second template pages according to the comparison result between the first layout type information and the second layout type information, the method further comprises:
and when the first layout type information is the other types of information, changing the first layout type information into the image-text type information.
8. A presentation conversion apparatus, comprising:
an acquisition unit, configured to acquire a presentation to be converted, a target document style tag and a plurality of first template pages;
an analysis unit, configured to perform style recognition processing on each first template page to obtain a first document style tag of each first template page, compare the first document style tag with the target document style tag, and determine a second template page from the plurality of first template pages according to a comparison result between the first document style tag and the target document style tag;
a recognition unit, configured to perform image recognition processing on the presentation to be converted to obtain first object information of the presentation to be converted, and perform image recognition processing on each second template page to obtain second object information of each second template page;
a matching unit, configured to match the first object information with the second object information, and determine a target page from each second template page according to a matching result between the first object information and the second object information;
and a generating unit, configured to generate a target presentation according to the first object information and the target page.
9. An electronic device, characterized in that the electronic device comprises a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the steps of the presentation conversion method according to any one of claims 1 to 7.
10. A storage medium, which is a computer-readable storage medium, characterized in that the storage medium stores one or more programs executable by one or more processors to implement the steps of the presentation conversion method according to any one of claims 1 to 7.
CN202210687460.0A 2022-06-17 2022-06-17 Presentation file conversion method, device, equipment and storage medium Pending CN115034177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210687460.0A CN115034177A (en) 2022-06-17 2022-06-17 Presentation file conversion method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210687460.0A CN115034177A (en) 2022-06-17 2022-06-17 Presentation file conversion method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115034177A true CN115034177A (en) 2022-09-09

Family

ID=83125721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210687460.0A Pending CN115034177A (en) 2022-06-17 2022-06-17 Presentation file conversion method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115034177A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117216586A (en) * 2023-09-12 2023-12-12 北京饼干科技有限公司 Method, device, medium and equipment for generating presentation template

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination