NL2031543B1 - Method and device for processing image data - Google Patents


Info

Publication number
NL2031543B1
Authority
NL
Netherlands
Prior art keywords
data
source
image
item
processing
Prior art date
Application number
NL2031543A
Other languages
Dutch (nl)
Inventor
Boxma Hendrik
Original Assignee
Boxma It B V
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boxma It B V filed Critical Boxma It B V
Priority to NL2031543A priority Critical patent/NL2031543B1/en
Application granted granted Critical
Publication of NL2031543B1 publication Critical patent/NL2031543B1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/103Formatting, i.e. changing of presentation of documents
    • G06F40/117Tagging; Marking up; Designating a block; Setting of attributes

Abstract

Mark-up data is combined with source data to generate a display of a user interface, like a webpage, in a mark-up language as an image source file. The source data may comprise text and/or images or other content items. With the mark-up data, unique elements with unique identifiers are provided, comprising content items or references thereto and data specifying how they are to be marked up. By analysing data in the image source file, locations of marked-up content items in an image to be shown on a screen may be identified. Based thereon, particular sections of the image may be processed, by modifying mark-up data or modifying data in a pixel domain. This may be combined with replacing original content items with other content items, for example for one or more different languages.

Description

P132044NL00
Title: Method and device for processing image data
TECHNICAL FIELD
The various aspects and implementations thereof relate to electronically automated processing of image data. The image data may relate to a graphic user interface.
BACKGROUND
As companies operate on an international scale, communication in multiple languages is preferred. Multiple languages may have to be used for man-machine interfaces and written communication. Man-machine interfaces may be interfaces directly provided on machinery, but also by means of webpages. And the written communication may be provided on actual paper, but also as webpages or as data otherwise provided on a computer screen.
SUMMARY
As some computer software or other computer implemented procedures require further explanation of their use to some people and in view of various language requirements - requirements to operate in conjunction with various languages - it is preferred to provide particular screen images of the user interface, in various versions with the various languages for the documentation. And in those images, sometimes editing may be required. As particular parts of text may have different lengths, varying with the applicable language, an area to be edited may vary in size.
It is preferred to provide a method for automated editing of marked up text data of one and the same user interface, for multiple languages, taking into account the varying sizes of the text.
A first aspect provides, in an electronic image processing device, a method of processing image data. The method comprises receiving a source data set comprising source items, receiving mark-up data identifying mark-up items associated with the source data set, receiving target data associated with a selected mark-up item comprised by the mark-up data and receiving processing data. As an example, the source data may comprise text, images and other content data. The mark-up data may specify how the content data is to be marked up, structured, placed on a screen or otherwise be processed before generating output visual data. This may be specified by
XML, HTML, RTF or another standard for marking up data. By means of the mark-up data, elements may be defined as marked-up text, marked-up images or other marked-up content items, data structures like tables or (numbered) lists, other, or a combination of two or more thereof. One content item, like a word or a text string, may occur one or more times as an element. This means that one unique content item may be processed into two elements, each unique. Each unique object - a content item or an element - may be provided with a unique identifier. These identifiers may be provided from a single set, or a set of unique content item identifiers and a separate set of unique element identifiers may be provided.
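The identifier-based linking of content items and mark-up items described above may be sketched as follows. This is a minimal illustration only; the data values, the function name and the emitted HTML shape are assumptions rather than part of the disclosure.

```python
# Illustrative sketch: content items and mark-up items share reference
# identifiers, and marking up combines them into unique HTML elements.
# All names and values below are assumptions, not taken from the patent.

source_items = {           # one unique content item per reference identifier
    "a": "Address",
    "b": "City",
}

mark_up_items = {          # how each referenced item is to be marked up
    "a": {"tag": "h1", "style": "font-weight: bold"},
    "b": {"tag": "span", "style": "color: grey"},
}

def mark_up(identifier: str) -> str:
    """Combine a content item with its mark-up item into an HTML element.

    The reference identifier is also emitted as the element id, so the
    resulting element can later be located in the image source file.
    """
    text = source_items[identifier]
    item = mark_up_items[identifier]
    return f'<{item["tag"]} id="{identifier}" style="{item["style"]}">{text}</{item["tag"]}>'

print(mark_up("a"))
# <h1 id="a" style="font-weight: bold">Address</h1>
```

Because the identifier travels with the element into the generated file, later processing steps can find the element without knowing its pixel position in advance.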
The method further comprises generating an image file representing an image frame, based on the source data set and the mark-up data, and identifying, in the image frame, a target location of an image item associated with the selected mark-up item. The image file may be a pixel-based file, in accordance with for example the JPG standard, it may be a vector drawing based file, or it may be a file comprising alphanumerical data defining both content data like text and data specifying how to mark up the text. As an example of the latter, the image file may be a so-called HTML file.
As a further example, an image item may be an element that may be defined by means of the mark-up data. The target location may be a particular extremity of the visualised element, like a particular corner, a particular edge or another particular extremity. Alternatively or additionally, the target location may be defined as a centre of the element,
an area occupied by the element in the visualisation, other, or a combination of two or more thereof.
Based on the target location and the target data, a target area in the image frame is identified. The target area may for example be defined as an area covering one or more target locations, covering one or more elements identified by means of the target data. In one example, the target area is a minimal area, for example rectangular, triangular, otherwise polygonal, circular or otherwise curved, covering one or more elements - as defined by the mark-up data.
The method further comprises processing data comprised by the image file data related to the target area in accordance with the processing data to provide a processed image frame and providing the processed image file. The processing may comprise one or more individual processing operations, applied to one or more target locations. For example, one operation may be cropping the processed image file, blurring particular content, obfuscating particular content, highlighting particular content, hiding or omitting particular elements from the visualisation, other, or a combination of two or more thereof. The processing may be applied to data in the pixel domain - for example in a bitmap or compressed bitmap file -, in the mark-up domain - for example in an HTML file -, otherwise, or a combination of two or more thereof. For example, a first operation may involve amending HTML coding resulting in displaying of an element with content in bold letters. A second operation may take place after the HTML mark-up data has been processed and an image file has been generated, where the second operation may involve cutting out a particular part of the image.
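The two processing domains mentioned above - the mark-up language domain and the pixel domain - may be sketched as follows. The helper names and the nested-list bitmap representation are illustrative assumptions.

```python
# Illustrative sketch of the two processing domains described above; the
# helper names and the minimal "image" representation are assumptions.

def process_mark_up_domain(html: str, identifier: str) -> str:
    """First operation: amend the mark-up so an element is shown in bold."""
    needle = f'id="{identifier}"'
    return html.replace(needle, needle + ' style="font-weight: bold"')

def process_pixel_domain(pixels, left, top, right, bottom):
    """Second operation: cut a rectangular part out of a rendered bitmap,
    here modelled as a nested list of pixel values."""
    return [row[left:right] for row in pixels[top:bottom]]

html = '<p id="a">Address</p>'
print(process_mark_up_domain(html, "a"))
# <p id="a" style="font-weight: bold">Address</p>

bitmap = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(process_pixel_domain(bitmap, 1, 0, 3, 2))  # [[1, 2], [4, 5]]
```

The first helper runs before the image frame is generated; the second runs after, on the rendered pixels, matching the two-stage processing described above.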
In one implementation, the method further comprises receiving a base source data set comprising base references. In this implementation, generating the image file comprises generating, based on the base source data set and the mark-up data, an image source file describing a graphical representation of the base source data set in accordance with the mark-up data, the source items have source references associated therewith, the source references corresponding to the base references and generating the image file further comprises merging the base image source file and the source data set based on the correspondence between the source references and the base references to provide a merged image source file as a generated image file.
In this implementation, it is possible to for example generate an
HTML file as discussed above, with base text content data. The text data may be provided, for example, in a JSON file. The text may be provided with unique identifiers for each object in the base source file, either apart from the text or in the same field as the text. During the merging step, the text from the base source file may be replaced by text from another source file. In one example, the base source file may comprise Dutch text and the other source data set may comprise text in Cantonese, Gujarati, Frisian or another language. In the merging step, based on the source reference, the
Dutch word for, for example, 'address' may be replaced by the Cantonese word for 'address'. This replacing is preferably done in the image source file, in HTML format or a format equivalent thereto.
In one example, this process is automated such that a single selection action results in generating multiple merged image source files, one for each language, and multiple processed image files, all processed for insertion in, for example, a user manual.
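The merging step described above may be sketched as follows, assuming the source data sets are simple identifier-to-text mappings (in practice, for example, JSON files) and the image source file is an HTML string. The data values are assumptions.

```python
# Minimal sketch of the merging step: base texts in the image source file
# are replaced by texts from another language set carrying the same
# reference identifiers. All values below are illustrative assumptions.

base_source = {"a": "adres", "b": "stad"}          # e.g. Dutch base texts
other_source = {"a": "address", "b": "city"}       # e.g. English texts

base_image_source = '<p id="a">adres</p><p id="b">stad</p>'

def merge(image_source: str, base: dict, other: dict) -> str:
    """Replace each base text by the other-language text that carries the
    same reference identifier."""
    merged = image_source
    for identifier, base_text in base.items():
        marker = f'id="{identifier}">'
        merged = merged.replace(marker + base_text, marker + other[identifier])
    return merged

print(merge(base_image_source, base_source, other_source))
# <p id="a">address</p><p id="b">city</p>
```

Running this once per language set yields one merged image source file per language, as in the single-selection automation described above.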
In a further implementation, the processing further comprises at least one of: processing the merged image source file and generating the image frame based on the processed merged image source file; and generating the image frame based on the merged image source file and processing the image frame. As discussed above, processing may be executed in the pixel domain and/or in the mark-up language domain.
In another implementation, the mark-up data comprises the mark-up items associated with source items; and the mark-up items comprise data for marking up source items. In this example, the method further comprises marking up the source items in accordance with the associated mark-up items. The mark-up data may use a source item at more than one occurrence, resulting in multiple - unique - marked-up elements based on one source item. Each marked-up element may be marked up in the same or in a different way.
In again another implementation, the mark-up data items have mark-up data item identifiers associated with them and the mark-up items are associated with the source item identifiers. This facilitates convenient marking up.
Yet a further implementation further comprises receiving a multitude of source data sets, each data set comprising text data different from the text data in the other data sets. In this implementation, source items in different sets to be marked up in accordance with the same mark-up data item are associated with the same source item identifier. This example supports conveniently generating images of the same user interface in different languages, as discussed above.
In again a further implementation, the target data is associated with at least one source item identifier. In another implementation, target data is associated with identifiers of mark-up data items or marked-up elements.
Another implementation further comprises checking whether the processing data comprises processing mark-up data and modifying the received mark-up data in accordance with the processing mark-up data if the processing data comprises processing mark-up data. In this implementation, the generating of the image file is based on the amended mark-up data. As discussed above, the processing operations may be in the mark-up language domain, as well as in the pixel domain.
In again a further implementation, the processing comprises providing further mark-up data; and the method further comprises generating the image frame based on the source data set, the mark-up data and the further mark-up data. As indicated above, the processing operations may be executed in the mark-up language domain.
Yet another implementation further comprises receiving image source data comprising image source items, receiving image mark-up data associated with the image source data and generating the image frame based on the source data set, the image source data, the mark-up data and the image mark-up data. Such additional image data or other type of object data may be an arrow, a rectangle, either transparent or not, another image, blur, other, or a combination of two or more thereof.
In another implementation, the mark-up items are, in the mark- up data, associated with mark-up item identifiers and the target data comprises at least one target identifier matching at least one mark-up item; and the processing of data comprised by the image file data comprises applying the processing to a marked up item being associated with a mark- up item identifier matching the target identifier. Using identifiers aids the processing operation.
In yet a further implementation, the source items are, in the source data set, associated with source item identifiers and the mark-up data comprises source mark-up identifiers associated with mark-up items; and the generating of the image file comprises marking up at least one source item having a source item identifier associated therewith based on a mark-up item having a source mark-up identifier associated therewith matching the source item identifier. Using identifiers aids the marking up operation.
A second aspect provides an image processing device for processing image data comprising an input module, a processing module and an output module, the processing module being arranged to receive, via the input unit, a source data set comprising source items; receive, via the input unit, mark-up data identifying mark-up items associated with the source data set; receive, via the input unit, target data associated with a selected mark-up item comprised by the mark-up data; receive, via the input unit, processing data; generate an image file representing an image frame, based on the source data set and the mark-up data; identify, in the image frame, a target location of an image item associated with the selected source item; identify, based on the target location and the target data, a target area in the image frame; process data comprised by the image file data related to the target area in accordance with the processing data to provide a processed image frame and provide, via the output unit, the processed image file.
A third aspect provides a computer program product comprising computer executable code causing a computer, when loaded in a memory operatively connected to a processing unit of the computer, to execute a method of: receiving a source data set comprising source items, receiving mark-up data associated with the source data set, receiving target data associated with a selected source item comprised by the source data set, receiving processing data, generating an image frame based on the source data set and the mark-up data, identifying, in the image frame, a target location of an image item associated with the selected source item, based on the target location and the target data, identifying a target area in the image frame, processing the target area in accordance with the processing data to provide a processed image frame and providing the processed image frame.
A fourth aspect provides a non-transitory medium having stored thereon computer executable code causing a computer, when loaded in a memory operatively connected to a processing unit of the computer, to execute a method of receiving a source data set comprising source items, receiving mark-up data associated with the source data set, receiving target data associated with a selected source item comprised by the source data set, receiving processing data, generating an image frame based on the source data set and the mark-up data, identifying, in the image frame, a target location of an image item associated with the selected source item, based on the target location and the target data, identifying a target area in the image frame, processing the target area in accordance with the processing data to provide a processed image frame, and providing the processed image frame.
BRIEF DESCRIPTION OF THE DRAWINGS
The various aspects and implementations thereof will now be discussed in further detail in conjunction with Figures. In the Figures:
Figure 1: shows an electronic data processing system;
Figure 2: shows a first flowchart;
Figure 3: shows a first schematic view of data files, processing of the data files and resulting images;
Figure 4: shows a second flowchart; and
Figure 5: shows a second schematic view of data files, processing of the data files and resulting images.
DETAILED DESCRIPTION
Figure 1 shows an electronic data processing system 100 as an example of the first aspect. The data processing system 100 comprises a data processing device 110, a keyboard 122 and a mouse 124 as examples of input devices and an electronic display screen 130 as output device. The data processing device 110 comprises a central processing unit 112, a storage unit 114, an output controller 116 and an input controller 118. The data processing device 110 may further comprise a network controller for communicating with other data processing devices. In another example, a touch screen is provided as an input device and an output device in one.
The electronic display screen 130 shows a webpage as an example of a graphic user interface. The webpage comprises header text 132, a menu bar 134 with menu items and content text 136. The header text 132, the text of the menu bar 134 and the content text 136 are provided in a first language, for example Armenian. As companies tend to operate beyond borders, in different regions where different languages are spoken, it is preferred to present one and the same website in different languages; the owner of the website may also want to present the content of the website in
Pashtun, Swahili, Gujarati and Cantonese.
As some computer software or other computer implemented procedures require further explanation of their use to some people and in view of various language requirements - requirements to operate in conjunction with various languages - it is preferred to provide particular screen images of the user interface, in various versions with the various languages for the documentation. And in those images, sometimes editing may be required. As particular parts of text may have different lengths, varying with the applicable language, an area to be edited may vary in size.
Figure 2 shows a first flowchart 200 depicting a process that may be executed by the data processing system 100 in general and the data processing device 110 in particular. The process depicted by the first flowchart 200 is arranged to edit different user interfaces like websites to provide edited images of websites in multiple languages. The different edited images may for example be used in user manuals for the user interface in different languages.
The procedure depicted by the first flowchart 200 will be discussed in conjunction with Figure 1 and Figure 3. Figure 3 shows various datasets used and processed by the procedure depicted by the first flowchart 200. The various items of the first flowchart 200 may be executed in series or in parallel and the order of process items may be swapped, unless explicitly indicated otherwise below. The various parts of the procedure may be summarised as follows:
202 start procedure
204 receive mark-up data
206 receive target data
208 receive processing data
210 receive source data set
212 generate image data
214 provide image
216 identify target locations
218 identify target area
220 process target image data
222 processing done for image?
224 provide output image
226 all languages done?
228 end procedure
232 next processing step
234 next source data set
The procedure starts in a terminator 202 and proceeds to step 204 in which mark-up data 310 is received. More in particular, the mark-up data 310 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The mark-up data 310 comprises data specifying how various content items are to be marked up to provide a user interface. In the mark-up data 310, five reference identifiers a, b, c, d and e are provided, each with a mark-up data item indicating a mark-up action. The reference identifiers provide identifiers that may be used to link text items, mark-up data, target data and processing data, as discussed below.
The mark-up data 310 may indicate a font in which a particular text item is to be presented, whether the text is to be presented in italic, bold, or a combination thereof, a colour of the text, a font size, a position of the text item, a hyperlink or other action associated with the text item, other, or a combination thereof. Furthermore, the mark-up may also define insertion of an image file, a structure of a menu like a drop down selection box, a table, other, or a combination of two or more thereof.
In step 206, target data 322 is received. More in particular, the target data 322 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The target data 322 provides information on particular targets for processing. The targets may be text items or areas determined by one, two or more text items. This will be discussed below in further detail.
In step 208, processing data 324 is received. More in particular, the processing data 324 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The processing data 324 provides information on what processing is to be executed on a particular target. With the processing data 324 being related to a particular target and the target being identified by one or more reference identifiers, the processing data 324 is indirectly linked to mark-up data. Optionally, the target data 322 and the editing data 324 are provided in an editing file 320.
In step 210, a first source dataset 302 is received. More in particular, the first source data set 302 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The first source dataset 302 comprises in this example five text items that are each identified with a reference identifier and one of the reference identifiers a, b, c, d and e in particular. As such, text items, mark-up data, processing data and target data are linked by means of the reference identifiers, either directly or indirectly.
Figure 3 depicts three source datasets; the first source dataset 302, the second source dataset 304 and the third source dataset 306. In this example, each source dataset provides text items in a particular language.
In this example, the first source dataset 302 comprises text elements in
Cantonese, the second source dataset 304 comprises text elements in
Gujarati and the third source dataset 306 comprises text elements in
Afrikaans. Each source dataset comprises five text items, each referenced by one of the reference identifiers a, b, c, d and e. In this example, each text item referenced by means of the same reference identifier has the same meaning, in different languages.
As languages differ from one another, different texts in different languages having the same meaning may vary in length. This means that different text items with the same reference identifier may have different lengths. As depicted by Figure 3, a first text item in Cantonese referenced by identifier a in the first source dataset 302 is shorter than a second text item referenced by identifier a in the second source dataset 304 with text in
Gujarati. The text in Afrikaans, provided by a third text item referenced by identifier a in the third source dataset 306, is as long as the text in Cantonese.
In step 212, image data of a user interface is generated for a particular language, in this example in Cantonese. The user interface is in this example generated by means of a headless browser 332 comprised by an image data parser 330. The headless browser 332 marks up text items comprised by the first source dataset 302, using the mark-up data 310.
With each text item being associated with a mark-up item by means of the reference identifiers, each text item of the first source dataset 302 is marked up. By marking up the text items accordingly, an image is provided by the headless browser 332. The image thus generated may be provided in various formats; as a bitmap, as a vector file, as an HTML file or file having an equivalent marking up, another format or a combination thereof. The data may be provided uncompressed, or compressed, either lossy or lossless. The image thus generated by the headless browser 332 is provided to an image processor 334 that is also comprised by the image data parser 330.
In step 216, the image processor 334 identifies target locations provided by the target data 322. In this example, first target locations are provided as a first item by ab. This means that the first target locations are the locations of the text items in the image file, which text items are associated with the first reference identifier a and the second reference identifier b. A second target location is the location of text item a - again - and third target locations are the locations of text item identified by reference identifier c and text item identified by reference identifier d.
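Resolving a target data item such as ab into target locations may be sketched as follows; the coordinates and names below are illustrative assumptions.

```python
# Illustrative sketch: a target data item is a string of reference
# identifiers, and each identifier resolves to the location of the
# marked-up element in the image. Coordinates are assumptions.

locations = {"a": (10, 20), "b": (60, 80), "c": (10, 120), "d": (150, 140)}

def target_locations(target_item: str):
    """Resolve each reference identifier in a target data item to the
    location of the matching element."""
    return [locations[identifier] for identifier in target_item]

print(target_locations("ab"))  # [(10, 20), (60, 80)]
```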
In step 218, target areas are identified. In this example, the target areas are identified by means of target locations. More in particular, a first area is identified by a first pair of reference identifiers ab - the first item of the target data 322 - and a second pair of reference identifiers cd.
With the first pair of reference identifiers ab, a rectangle is determined by the first text item with reference identifier a on one hand and by the second text item with reference identifier b on the other hand.
Definitions of target areas are depicted in the first image 342, as provided by the headless browser 332. The first target area is depicted by means of the dashed lines, with the upper left corner being identified by the upper left corner of the first text item referenced by reference identifier a and the lower right corner being identified by the lower right corner of the second text item referenced by the reference identifier b. In another implementation, the first text item may be provided further to the right than the second text item, in which case the upper right corner of the first text item determines the upper right corner of the target area and lower left corner of the second text item determines the lower left corner of the target area.
More in general, in such implementation of this aspect, the target area is determined as having the minimal size such that the first text item and the second text item are comprised by the target area - based upon the image data. In another example, the target area is determined as having the minimal size such that more than two items - text, images and/or other objects - are fit within the target area, such depending on the target data 322. The target area may also be defined by a single reference identifier, in which case the target area relates to the area the text item occupies in the first image 342.
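Determining a minimal rectangular target area covering two or more elements, as described above, may be sketched as follows; the element rectangles are illustrative assumptions.

```python
# Sketch of deriving the minimal rectangular target area from the
# rectangles occupied by marked-up elements; coordinates are assumptions.

def target_area(rects):
    """Smallest axis-aligned rectangle covering all given element
    rectangles, each given as (left, top, right, bottom) in pixels."""
    lefts, tops, rights, bottoms = zip(*rects)
    return (min(lefts), min(tops), max(rights), max(bottoms))

rect_a = (10, 20, 110, 40)   # area of the element with identifier a
rect_b = (60, 80, 200, 100)  # area of the element with identifier b

print(target_area([rect_a, rect_b]))  # (10, 20, 200, 100)
```

Because the rectangles follow the rendered text, the resulting target area automatically grows or shrinks with the length of the text in the selected language.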
In step 220, processing is applied to a target area. For the first target area defined by the text items identified by the first reference identifier a and the second reference identifier b, the processing action is a cropping action: the first image 342 provided by the headless browser is cut to the size indicated by the dashed line.
In step 222, it is checked whether all target areas have been processed. If this is not the case, the next target area and the associated processing step are identified in step 232 and the procedure branches back to step 216, in which one or more target locations that identify the next target area are identified. In this example, two additional and further target areas are identified by the target data 322. A second target area is identified as the area occupied by the marked up text item with reference identifier a and a third target area is identified as an area occupied by the marked up text item with reference identifier c, the marked up text item with reference identifier d and all image data in between.
With the second target area, processing is associated by providing a frame around the marked up text item associated with the first reference identifier a. With the third target area, defined by positions of the marked up text item associated with the reference identifier c and the reference identifier d, a processing is associated by obfuscating - hiding - image data in the third target area. The hiding may be executed in several ways, for example by blurring data, providing alternative image data or by cutting out image data, leaving a black, white or transparent area, or by providing an otherwise homogeneously coloured area. In this particular example, for Cantonese, the loop provided with process item 232 is run twice, with two times branching back.
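The framing and obfuscation operations described for the second and third target areas may be sketched as follows, on a bitmap modelled as a nested list of pixel values; a real implementation would operate on actual image data, so the representation is an assumption.

```python
# Sketch of two processing actions on a target area of a bitmap, the
# bitmap being modelled as a nested list of pixel values (an assumption).

def obfuscate(pixels, left, top, right, bottom, value=0):
    """Hide a target area by overwriting it with one homogeneous value."""
    for row in pixels[top:bottom]:
        row[left:right] = [value] * (right - left)
    return pixels

def frame(pixels, left, top, right, bottom, value=9):
    """Draw a one-pixel frame around a target area."""
    for x in range(left, right):
        pixels[top][x] = value
        pixels[bottom - 1][x] = value
    for y in range(top, bottom):
        pixels[y][left] = value
        pixels[y][right - 1] = value
    return pixels

bitmap = [[1] * 4 for _ in range(4)]
print(obfuscate(bitmap, 1, 1, 3, 3))
# [[1, 1, 1, 1], [1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
```

Since the target area is derived from the element locations, the same two helpers can be reused unchanged for every language, with only the coordinates differing.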
After the processing has been executed for Cantonese in the first image 342, a first processed image 352 is provided in step 224. In step 226, it is checked whether all languages - all source data sets - have been processed.
If this is not the case, the process branches back to step 210, while in step 234 another - the next - source data file is selected. In this example, that means that the second source data file 304 with text in Gujarati is selected.
The second source data file 304 is processed in accordance with the mark-up data 310, providing a second image 344. The second image 344 is processed in accordance with the processing data 324, yielding a second processed image 354. Likewise, a third image 346 and a third processed image 356 is provided for Afrikaans in a third loop.
With the third text item and the fourth text item in Cantonese taking up, in marked up form, less space than in Gujarati and more than in
Afrikaans, the obfuscated area takes up more space in the second processed image 354 than in the first processed image 352 and less space in the third processed image than in the first processed image 352. With the text items, mark-up data items, target areas and processing data items cross-referenced, either directly or indirectly, by means of the reference identifiers rather than fixed image positions, processing of images may be automated for multiple languages, automatically taking into account differences in length of text items in various languages, while such text items have the same textual meaning. If all source data files have been processed, the procedure moves from step 226 to step 228, at which the procedure ends.
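The overall multi-language loop may be sketched compactly as follows. The render and process helpers stand in for the headless browser 332 and the image processor 334, and all data values are illustrative assumptions.

```python
# Compact sketch of the multi-language loop: for every language set, the
# same mark-up and the same processing steps are applied, so the target
# areas follow the (language-dependent) text sizes automatically. The
# helpers are stand-ins for the headless browser and the image processor.

mark_up_data = {"a": "h1", "b": "p"}

source_sets = {                       # one set per language (assumed texts)
    "yue": {"a": "地址", "b": "城市"},
    "gu":  {"a": "સરનામું", "b": "શહેર"},
    "af":  {"a": "adres", "b": "stad"},
}

def render(source, mark_up):
    """Stand-in for the headless browser: mark up every text item."""
    return "".join(
        f'<{mark_up[i]} id="{i}">{text}</{mark_up[i]}>' for i, text in source.items()
    )

def process(image_source, target_identifier):
    """Stand-in for one processing step: highlight the target element."""
    needle = f'id="{target_identifier}"'
    return image_source.replace(needle, needle + ' class="highlight"')

processed = {
    lang: process(render(source, mark_up_data), "a")
    for lang, source in source_sets.items()
}
print(processed["af"])
# <h1 id="a" class="highlight">adres</h1><p id="b">stad</p>
```

One pass of the loop produces one processed image file per language, matching the branch from step 226 back to step 210 via step 234.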
The procedure depicted by the second flowchart 400 will be discussed in conjunction with Figure 1 and Figure 5. Figure 5 shows various datasets used and processed by the procedure depicted by the second flowchart 400. The various items of the second flowchart 400 may be executed in series or in parallel and the order of process items may be swapped, unless explicitly indicated otherwise below.
The various parts of the procedure may be summarised as follows:
402 start procedure
404 receive mark-up data
406 receive target data
408 receive processing data
410 receive base source data
412 generate base image source file
414 receive source data set
416 merge base image source file and source data set
418 identify target objects
420 identify target locations
422 identify target area
424 process target area data
426 image source file done?
428 generate image frame
430 identify target locations
432 process target area data
434 image frame done?
436 provide image file
438 all languages done?
440 procedure done
452 next source processing step
454 next frame processing step
456 next source data set
The procedure starts in a terminator 402 and proceeds to step 404, in which mark-up data 310 is received. More in particular, the mark-up data 310 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The mark-up data 310 comprises data on how various content items are to be marked up to provide a user interface. In the mark-up data 310, five mark-up item identifiers I, II, III, IV and V are provided, each with a mark-up data item indicating a mark-up action. The reference identifiers provide identifiers that may be used to link text items, mark-up data, target data and processing data, as discussed below. In this example, the mark-up data 310 comprises five reference identifiers, a, b, c, d and e in particular. The reference identifiers allow source data to be associated with the mark-up data. The mark-up item identifiers identify marked-up items in, for example, an
HTML file.
The mark-up data 310 may indicate a font in which a particular text item is to be presented, whether the text is to be presented in italic, bold, or a combination thereof, a colour of the text, a font size, a position of the text item, a hyperlink or other action associated with the text item, other, or a combination thereof. Furthermore, the mark-up may also define insertion of an image file, a structure of a menu like a drop-down selection box, a table, other, or a combination of two or more thereof.
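As an illustrative sketch, the mark-up data 310 with its mark-up item identifiers I through V and reference identifiers a through e may be represented as follows. The concrete field names and mark-up actions are assumptions; the description above does not prescribe a format.

```python
# Illustrative structure only: each entry carries a mark-up item identifier,
# the reference identifier linking it to a source item, and a mark-up action.
mark_up_data = [
    {"item_id": "I",   "ref": "a", "action": {"tag": "h1", "bold": True}},
    {"item_id": "II",  "ref": "b", "action": {"tag": "p"}},
    {"item_id": "III", "ref": "c", "action": {"tag": "p", "italic": True}},
    {"item_id": "IV",  "ref": "d", "action": {"tag": "button"}},
    {"item_id": "V",   "ref": "e", "action": {"tag": "a", "href": "#"}},
]


def refs_for_item(item_id):
    # Resolve which source reference a marked-up item points at.
    return [m["ref"] for m in mark_up_data if m["item_id"] == item_id]
```

The same reference identifier may later be used to look up target data and processing data for the marked-up item.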
In step 406, target data 322 is received. More in particular, the target data 322 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The target data 322 provides information on particular targets for processing. The targets may be text items or areas determined by one, two or more text items. This will be discussed below in further detail.
In step 408, processing data 324 is received. More in particular, the processing data 324 is received by the central processing unit 112 as retrieved from the memory unit 114 or from another data source. The processing data 324 provides information on what processing is to be executed on a particular target. With the processing data 324 being related to a particular target and the target being identified by one or more reference identifiers, the processing data 324 is indirectly linked to mark-up data. Optionally, the target data 322 and the processing data 324 are provided in an editing file 320.
In step 410, a base source data file 502 is received. The base source data file 502 comprises base source data items that are in this case identified with #, &, $, @ and _. Each of the data objects is tagged with a base source reference, which are a, b, c, d and e, respectively. The base source references may be included in one string, together with the applicable base data item, or provided in a separate field in a record together with a further field comprising the applicable base source item.
The base source data items as well as the base source references may comprise text strings in a particular language, additional reference data, other, or a combination thereof. In one example, the base source data items comprise text data and further base source reference data based on the base source references. In this example, the further base source reference data is derived from the base source references by means of hashing. As such, in practice, the further base source reference data may also be considered as base references.
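A minimal sketch of deriving further base source reference data by hashing, as described above, could look as follows. The choice of SHA-256 and the truncation to twelve characters are assumptions; the description only states that hashing is used.

```python
import hashlib


def further_reference(base_ref: str) -> str:
    # Derive further base source reference data from a base source reference
    # by hashing; deterministic, so the derived value can itself serve as a
    # base reference in practice.
    return hashlib.sha256(base_ref.encode("utf-8")).hexdigest()[:12]


# One derived reference per base source reference a through e.
derived = {ref: further_reference(ref) for ref in "abcde"}
```

Since the derivation is deterministic, matching source items against base source items works equally well on the derived values as on the original references.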
In step 412, a base image source file 512 is generated based on the base source data file 502 and the mark-up data 310. The base image source file is in one example a file providing mark-up data and data to be marked up, for example in HTML format, xml format or a similar type of formatting.
As such, the base image source file 512 has data in it for generating an image, based on source data like text and mark-up data indicating how the text is to be provided in the image from a graphical point of view. In step 414, the first source data set 302 is received. The base image source file 512 has, in the implementation shown in Figure 5, five elements, each having an element identifier as indicated in the mark-up data 310, i.e. roman numerals I through V. In this implementation, each element has text associated with it. In another implementation, one or more elements may have an image or a structural element like a drop-down menu or a table associated with it.
The base image source file 512 may, in particular with respect to height and width, also be based on a size of the electronic display screen 130 or another display definition. Such display definition may be based on a received or pre-defined screen size or window size of, for example, a headless browser.
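A minimal sketch of step 412 could combine the mark-up data and the base source data into an HTML-like base image source file. The tag choice, the `data-ref` attribute and the viewport hint are assumptions for the example; the description only requires that mark-up data and data to be marked up end up in one file.

```python
# Base source data items #, &, $, @ and _ tagged with references a through e.
base_source = {"a": "#", "b": "&", "c": "$", "d": "@", "e": "_"}
# Mark-up item identifiers I through V, each referring to a base source item.
mark_up = [("I", "a"), ("II", "b"), ("III", "c"), ("IV", "d"), ("V", "e")]


def generate_base_image_source(mark_up, base_source, width=1280, height=800):
    # Emit one element per mark-up item, carrying both the mark-up item
    # identifier and the reference identifier of the item it marks up.
    body = "".join(
        f'<div id="{item_id}" data-ref="{ref}">{base_source[ref]}</div>'
        for item_id, ref in mark_up
    )
    # The display definition (e.g. a headless-browser window size) may be
    # reflected in the generated source, here as a simple style hint.
    return (f'<html><body style="width:{width}px;height:{height}px">'
            f"{body}</body></html>")
```

The resulting string plays the role of the base image source file 512: a browser can render it, and the identifiers survive into the rendered document.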
The first source data set 302 is the same as discussed above in conjunction with Figure 2 and Figure 3. The first source dataset 302 comprises in this example five text items that are each identified with a reference identifier, one of the reference identifiers a, b, c, d and e in particular, as source references. Further references may be comprised by the five text items. In the example, base source references in the base source file 502 and, after step 412, in the image source file, correspond to source references in the first source data set 302. Such may be the reference identifiers a, b, c, d and e, further identifiers provided in the source items (and base source items, as described above), other, or a combination thereof.
In step 416, the first source data set 302 is merged with the image source file 512. The merging of the image source file 512 and the first source data set 302 is in an example executed by replacing part or all of the base source items by source items comprised by the first source data set 302. The replacing takes place by replacing base source items with a particular identifier by source items having the same or an otherwise corresponding reference. This means that a base source item with a particular base source reference is, in the image source file 512, replaced by a source item from the first source data set 302 with a particular source reference matching the particular base source reference. As indicated above, such reference may be one of the identifiers a, b, c, d or e, or, additionally or alternatively, an identifier comprised by the base source item and the source item.
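The replacement by matching references in step 416 may be sketched as follows. The `data-ref` attribute format is an assumption carried over from the earlier sketch; any representation that keeps the reference next to the item would serve.

```python
import re


def merge(image_source: str, source_set: dict) -> str:
    # Replace each base source item by the source item carrying the matching
    # reference; base items without a counterpart in the set are kept.
    def substitute(match):
        ref = match.group(1)
        return f'data-ref="{ref}">{source_set.get(ref, match.group(2))}<'

    return re.sub(r'data-ref="([a-e])">([^<]*)<', substitute, image_source)
```

Applied to the base image source file, this yields the merged image source file for one language; repeating it with another source data set yields the file for the next language.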
Once the base image source file 512 and the first source data set 302 are merged as described above, the procedure proceeds to step 418. In step 418, target objects are identified. More in particular, based on the target data 322, corresponding objects are identified in the image source file 512. In one example, the target objects are identified using the headless browser 332, building a webpage from the base image source file 512 as depicted in Figure 5. In this implementation, target objects are identified by means of the mark-up item identifiers, for example as provided in the mark-up data.
In another implementation, target objects may be identified by means of the reference identifiers. However, certain source items may in some examples appear more than once in a user interface, for example a word "confirm" appearing on multiple buttons in the interface. In such case, the base source data file 502 - or the first source data file 302 - comprises the word "confirm" or an equivalent thereof only once and the mark-up data has multiple mark-up items, each with a unique mark-up item identifier and each referring to the same source item using the particular reference identifier of that source item. And in such case, it may be more efficient to use mark-up item identifiers for the target data - and optionally, processing data -, rather than reference identifiers.
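The "confirm" example above may be sketched as follows; it shows why targeting by mark-up item identifier stays unambiguous when one source item is used by several mark-up items. Identifiers and structure are assumptions for the illustration.

```python
# One source item, referenced by two mark-up items (e.g. two buttons).
source = {"e": "confirm"}
mark_up = [
    {"item_id": "IV", "ref": "e"},  # first button showing "confirm"
    {"item_id": "V",  "ref": "e"},  # second button showing "confirm"
]

# Targeting by mark-up item identifier selects exactly one marked-up item...
targets_by_item = [m for m in mark_up if m["item_id"] == "V"]
# ...whereas targeting by reference identifier selects every occurrence.
targets_by_ref = [m for m in mark_up if m["ref"] == "e"]
```

This is the efficiency argument made above: when a source item appears more than once in the user interface, the mark-up item identifier is the sharper handle for target data and, optionally, processing data.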
With target objects identified, target locations of the target object in the webpage may be identified in step 420. And with the target locations identified, a target area may be identified in step 422. In step 424, based on the processing data 324, processing is applied to an identified target area in the base image source file 512 by a mark-up processor 552. The mark-up processor may be comprised by the central processing unit 112, either hardwired or by means of software. The processing may be executed as discussed in conjunction with the first flowchart 200.
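Steps 420 and 422 may be sketched as follows: a headless browser can report a bounding box per element (for example via a call comparable to `getBoundingClientRect`), and the target area is then the rectangle spanned by the boxes of the target objects. The box values below are made up for the illustration.

```python
# Bounding boxes as a headless browser could report them for the mark-up
# item identifiers; the coordinates here are illustrative only.
boxes = {
    "III": {"x": 40, "y": 120, "width": 200, "height": 24},
    "IV":  {"x": 40, "y": 180, "width": 260, "height": 32},
}


def target_area(item_ids):
    # Smallest rectangle enclosing all given target objects, as
    # (x1, y1, x2, y2) in pixel coordinates.
    xs = [boxes[i]["x"] for i in item_ids]
    ys = [boxes[i]["y"] for i in item_ids]
    x2 = [boxes[i]["x"] + boxes[i]["width"] for i in item_ids]
    y2 = [boxes[i]["y"] + boxes[i]["height"] for i in item_ids]
    return (min(xs), min(ys), max(x2), max(y2))
```

With two target objects, the two corners of the resulting rectangle correspond to the target location and the further target location discussed for the claims.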
After a first target area has been processed or data provided in the target area has been processed, the procedure checks in step 426 whether there are further target areas to be processed or comprising data to be processed. If such is the case, the procedure branches back to step 418, via step 452 in which the next target area and the next processing data is selected.
Once the applicable target areas have been processed, the procedure proceeds to step 428, in which the image processor 334 generates an image frame based on the processed base image source file 512. The data of the image frame may be provided by means of a bitmap, a vector graphic file, other, or a combination thereof. In step 430, target objects are identified in the image frame.
The target objects may be determined by means of image recognition, by means of data comprised by the image source file, by means of data generated earlier by the headless browser 332, other, or a combination thereof. In step 432, the locations of the target objects are determined. In the case of using data generated earlier by the headless browser 332, an exact position of a target object may be determined, for example using pixel coordinates. The headless browser used for providing the data may have a screen size or window size as input, by means of which data a total size of the output may be determined. Locations and sizes of elements within the output of the headless browser may as such be determined by the data on screen size or window size as received.
In step 434, based on the locations of one or more target objects, the target areas are determined. The target area may be an area determined by the size taken up by a target object, like the size of rendered text or the size of a frame of the element that is defined as a target object. In step 436, data in the target area of the image frame is processed in accordance with the processing data 324.
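Processing the target area in the pixel domain may be sketched as follows, here with obfuscation by overwriting pixels. The nested list stands in for bitmap data of the image frame; a real implementation would operate on the actual bitmap or vector data.

```python
def obfuscate(frame, area, fill=0):
    # Overwrite every pixel inside the (x1, y1, x2, y2) rectangle with a
    # fill value, obfuscating the data in the target area.
    x1, y1, x2, y2 = area
    for y in range(y1, y2):
        for x in range(x1, x2):
            frame[y][x] = fill
    return frame


# A small all-white frame (8 x 6 pixels) with a target area obfuscated.
frame = [[255] * 8 for _ in range(6)]
obfuscate(frame, (2, 1, 5, 4))
```

Blurring or cropping, also named as processing options above, would replace only the inner operation: averaging neighbouring pixels, or slicing the frame to the rectangle.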
In step 438, it is checked whether all applicable target areas for the image frame have been processed. If this is not the case, the process branches back to step 428 via step 454, in which the next processing step is selected. If the target areas have been processed, the procedure checks in step 440 whether all available source data sets have been processed. If not all source data sets - all languages - have been processed, the procedure branches back to step 410 via step 456, in which the next source data set is selected. If all source data sets have been selected, the processed image file is provided by the central processing unit 112 and the procedure ends in step 442.
It is noted that, in a broader sense, an image source file, either with the base source data set, the first source data set 302 or another source data set, may also be considered as an image file. In such example, the actual dimensions of the image frame determined by the image file depend on a browser and/or a screen used, but data in the image source file 512 describes how the data in the image source file 512 is to be displayed as an image frame on an electronic display screen.
The distinction made above between the image source file 512 and an actual image frame is only provided to explain that the processing of image data may be executed in the source of the image and/or in the image itself. This applies to the first aspect in general and to the procedures depicted by the first flowchart 200 and the second flowchart 400 in particular.
In summary, the various aspects and implementation thereof relate to a method in which mark-up data is combined with source data to generate a display of a user interface, like a webpage, in a mark-up language as an image source file. The source data may comprise text and/or images or other content items. With the mark-up data, unique elements with unique identifiers are provided, comprising content items or references thereto and data how they are to be marked up. By analysing data in the image source file, locations of marked-up content items in an image to be shown on a screen may be identified.
Based thereon, particular sections of the image may be processed, by modifying mark-up data or modifying data in a pixel domain.
This may be combined with replacing original content items with other content items, for example for one or more different languages.

Claims (20)

1. In an electronic image processing device, a method of processing image data, the method comprising: receiving a source data set comprising source items; receiving mark-up data identifying mark-up items associated with the source data set; receiving target data associated with a selected mark-up item comprised by the mark-up data; receiving processing data; generating an image file representing an image frame, based on the source data set and the mark-up data; identifying, in the image frame, a target location of an image item associated with the selected mark-up item; based on the target location and the target data, identifying a target area in the image frame; processing data comprised by the image file data related to the target area in accordance with the processing data to provide a processed image frame; providing the processed image file.

2. The method of claim 1, further comprising receiving a base source data set comprising base references, wherein: generating the image file comprises generating, based on the base source data set and the mark-up data, an image source file describing a graphical representation of the base source data set in accordance with the mark-up data; the source items have source references associated therewith, the source references corresponding to the base references; generating the image file further comprises merging the base image source file and the source data set based on the correspondence between the source references and the base references to provide a merged image source file as a generated image file.

3. The method of claim 2, wherein the processing further comprises at least one of: processing the merged image source file and generating the image frame based on the processed merged image source file; and generating the image frame based on the merged image source file and processing the image frame.

4. The method of any one of claims 1 to 3, wherein: the mark-up data comprises the mark-up items associated with source items; and the mark-up items comprise data for marking up source items; the method further comprising marking up the source items in accordance with the associated mark-up items.

5. The method of any one of claims 1 to 4, wherein the mark-up data items have mark-up data item identifiers associated therewith and the mark-up items are associated with the source item identifiers.

6. The method of any one of the preceding claims, wherein the source items comprise text data.

7. The method of claim 6 when dependent on claim 4 and claim 3, further comprising: receiving a plurality of source data sets, each data set comprising text data different from text data in another data set; wherein source items in different sets to be marked up in accordance with the same mark-up data item are associated with the same source item identifier.

8. The method of any one of the preceding claims when dependent on claim 5, wherein the target data is associated with at least one source item identifier.

9. The method of any one of the preceding claims, further comprising: checking whether the processing data comprises processing mark-up data; modifying the received mark-up data in accordance with the processing mark-up data if the processing data comprises processing mark-up data; wherein generating the image file is based on the modified mark-up data.

10. The method of any one of the preceding claims, wherein the target data is associated with a further selected mark-up item, the method comprising: identifying a further target location of a further image item associated with the further selected mark-up item; and identifying the target area as a rectangle defined by the target location and the further target location.

11. The method of claim 9 or 10, wherein the target location defines a first corner of the rectangle and the further target location defines a second corner of the rectangle, the first corner being diagonally opposite the second corner.

12. The method of any one of the preceding claims, wherein the processing comprises one of: cropping the image frame to the target area; blurring data in the target area; obfuscating data in the target area; adding an object to the image frame at the target location.

13. The method of any one of the preceding claims, wherein: the processing comprises providing further mark-up data; and the method further comprises generating the image frame based on the source data set, the mark-up data and the further mark-up data.

14. The method of any one of the preceding claims, further comprising: receiving image source data comprising image source items; receiving image mark-up data associated with the image source data; generating the image frame based on the source data set, the image source data, the mark-up data and the image mark-up data.

15. The method of any one of the preceding claims, wherein the image file comprises vector data, bitmap data and data in a mark-up language associated with at least one of text data and image data.

16. The method of any one of the preceding claims, wherein: the mark-up items are, in the mark-up data, associated with mark-up item identifiers and the target data comprises at least one target identifier corresponding to at least one mark-up item; and processing data comprised by the image file data comprises applying the processing to a marked-up item associated with a mark-up item identifier corresponding to the target identifier.

17. The method of any one of the preceding claims, wherein: the source items are, in the source data set, associated with source item identifiers and the mark-up data comprises source mark-up identifiers associated with mark-up items; and generating the image file comprises marking up at least one source item having a source item identifier associated therewith based on a mark-up item having a source mark-up identifier associated therewith corresponding to the source item identifier.

18. An image processing device for processing image data comprising an input module, a processing module and an output module, the processing module being arranged to: receive, via the input module, a source data set comprising source items; receive, via the input module, mark-up data identifying mark-up items associated with the source data set; receive, via the input module, target data associated with a selected mark-up item comprised by the mark-up data; receive, via the input module, processing data; generate an image file representing an image frame, based on the source data set and the mark-up data; identify, in the image frame, a target location of an image item associated with the selected source item; identify, based on the target location and the target data, a target area in the image frame; process data comprised by the image file data related to the target area in accordance with the processing data to provide a processed image frame; provide, via the output module, the processed image file.

19. A computer program product comprising computer executable code that, when loaded into a memory operably connected to a processing unit of a computer, causes the computer to perform a method to: receive a source data set comprising source items; receive mark-up data associated with the source data set; receive target data associated with a selected source item comprised by the source data set; receive processing data; generate an image frame based on the source data set and the mark-up data; identify, in the image frame, a target location of an image item associated with the selected source item; identify, based on the target location and the target data, a target area in the image frame; process the target area in accordance with the processing data to provide a processed image frame; provide the processed image frame.

20. A non-volatile medium having stored thereon computer executable code that, when loaded into a memory operably connected to a processing unit of a computer, causes the computer to perform a method to: receive a source data set comprising source items; receive mark-up data associated with the source data set; receive target data associated with a selected source item comprised by the source data set; receive processing data; generate an image frame based on the source data set and the mark-up data; identify, in the image frame, a target location of an image item associated with the selected source item; identify, based on the target location and the target data, a target area in the image frame; process the target area in accordance with the processing data to provide a processed image frame; provide the processed image frame.
NL2031543A 2022-04-08 2022-04-08 Method and device for processing image data NL2031543B1 (en)

Publications (1)

Publication Number Publication Date
NL2031543B1 true NL2031543B1 (en) 2023-11-03


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7213202B1 (en) * 1998-12-11 2007-05-01 Microsoft Corporation Simplified design for HTML
US20130262983A1 (en) * 2012-03-30 2013-10-03 Bmenu As System, method, software arrangement and computer-accessible medium for a generator that automatically identifies regions of interest in electronic documents for transcoding
WO2016138394A1 (en) * 2015-02-26 2016-09-01 Graphiti Inc. Methods and systems for cross-device webpage replication
US20200327188A1 (en) * 2019-04-10 2020-10-15 Microsoft Technology Licensing, Llc Print scaling for digital markup-based documents

Similar Documents

Publication Publication Date Title
US6799299B1 (en) Method and apparatus for creating stylesheets in a data processing system
US8539342B1 (en) Read-order inference via content sorting
US9047261B2 (en) Document editing method
CN101128826B (en) Presentation method of large objects on small displays
US7272789B2 (en) Method of formatting documents
CA2773152C (en) A method for users to create and edit web page layouts
JP4344693B2 (en) System and method for browser document editing
CN111753500A (en) Method for merging and displaying formatted electronic form and OFD (office file format) and generating catalog
US8214735B2 (en) Structured document processor
US8819545B2 (en) Digital comic editor, method and non-transitory computer-readable medium
US20100172594A1 (en) Data System and Method
US20090249189A1 (en) Enhancing Data in a Screenshot
US11341324B2 (en) Automatic template generation with inbuilt template logic interface
US8952985B2 (en) Digital comic editor, method and non-transitory computer-readable medium
WO1998004978A1 (en) Draw-based editor for web pages
EP0862120A1 (en) Original text generating apparatus and its program storage medium
CN114330233A (en) Method for realizing correlation between electronic form content and file through file bottom
US20140215306A1 (en) In-Context Editing of Output Presentations via Automatic Pattern Detection
US20070061715A1 (en) Methods and systems for providing an editable visual formatting model
CN109656552B (en) Method for automatically converting design drawing into webpage based on box model
US9170991B2 (en) Enhanced visual table editing
IL226027A (en) Bidirectional text checker and method
CN112417338B (en) Page adaptation method, system and equipment
US9619445B1 (en) Conversion of content to formats suitable for digital distributions thereof
US20080163102A1 (en) Object selection in web page authoring