AU2013252158B2 - A method and system of producing an interactive version of a plan or the like - Google Patents


Info

Publication number
AU2013252158B2
Authority
AU
Australia
Prior art keywords
type
image
plan
interactive
object model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2013252158A
Other versions
AU2013252158A1 (en)
Inventor
Daniel Cochard
Caroline Kamhoua Matchuendem
Pierre-Jean REISSMANN
Didier Selles
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amadeus SAS
Original Assignee
Amadeus SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/454,540 (US9105073B2)
Priority claimed from EP 12368011.8 (EP2657883A1)
Application filed by Amadeus SAS
Publication of AU2013252158A1
Application granted
Publication of AU2013252158B2
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 - Pattern recognition
            • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q 10/00 - Administration; Management
                    • G06Q 10/02 - Reservations, e.g. for tickets, services or events
            • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00 - Scenes; Scene-specific elements
                    • G06V 20/20 - Scene-specific elements in augmented reality scenes
                • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
                    • G06V 30/40 - Document-oriented image-based pattern recognition
                        • G06V 30/42 - Document-oriented image-based pattern recognition based on the type of document
                            • G06V 30/422 - Technical drawings; Geographical maps
                • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/12 - Fingerprints or palmprints

Abstract

A method of producing an interactive plan of a location from an optical image of a plan of the same location, wherein the location includes a plurality of features of different types such as cabins and corridors, the method comprising the steps of applying a complex geometry and optical character recognition (CGOCR) process to the optical image to determine a plurality of functional data representative of the plurality of features of different types; converting the plurality of functional data into a plurality of object models; and combining the object models to construct the interactive plan for display to an end user.

Description

WO 2013/159878 PCT/EP2013/001142

A method and system of producing an interactive version of a plan or the like.

Field of the Invention

The present invention relates to a method and system of producing an interactive version of a plan. The method and system may be used in conjunction with a reservation system or associated method to make a booking.

Background of the Invention

In many travel related environments plans are provided, for example in a cruise reservation application, although plans are used in many other situations as well. In the present example, reference will be made to a cruise reservation process, although this is not intended to be a limitation on the scope of the present invention.

In the age of the Internet, passengers generally search for cruises by looking at the websites of specific travel providers or cruise ships. The passenger is able to view a deck plan during a selection and can open other windows to access a multimedia website in which a deck plan and photographs of cabins (and sometimes an associated cabin plan) can be viewed. An end user is not able to simultaneously view the available cabins on the image. In addition, the end user is unable to select a particular cabin, and there is no interactivity associated with the views and options available in a user interface.

The most common way in which the websites sell cruise solutions is as described above. The end user has access to images showing the various decks on the ship. This information is not accessible to a booking flow, although it can run in a parallel section of the websites associated with the ship. Alternatively, the booking information can be accessed by the end user in another window or another tab. The end user can further access photographs and videos showing the ship, as would be the case for viewing the deck plan.
In a website associated with "cruise-deals" the end user can access photographs of the cabins, videos of the cabins, and a table of the cabins. The end user has no access to a deck plan or any information relating to booking.

The "croisierenet" and "abcroisiere" websites use similar solutions. The end user is unable to see the location of the cabin of interest in relation to the ship in general.

In the case of the "cruise-direct" website, the end user has access to the same information as identified above in the other prior art examples. However, the end user can display booking information in another tab. As with all the other prior art examples, the ship plan is a non-interactive picture.

The above mentioned websites generally use the same solutions. None of these solutions offer any interactivity, and none are linked to the booking process. As a result, users can make choices based on viewed cabin plans where those cabins are ultimately not available because they have already been booked. This is frustrating and can result in the end user deciding not to bother to book anything, leading to loss of revenues for the cruise companies.

In a solution designed by "TravTech" for "cruise.com" the end user can view a deck plan and then place the mouse over a cabin to reveal further information, such as a photograph or the cabin category. The deck plan is still image-based and is not easily maintainable if changes are made.
The website "cdfcroisieresdefrance.com" discloses an image of the ship rather than a deck plan. An end user can place a mouse over a certain area of the image of the ship in order to display specific information.

US 2002/0099576 (MacDonald et al.) discloses a method and system for managing reservations, such as cabins on cruise ships, on the Internet with an interactive plan. The system provides graphical information and textual information relating to the room. When an end user moves a pointer over a room on the interactive image, there are parts of the image that are "clickable" in the context of an application. The availability of the cabins is regularly updated. The end user can book a cabin by clicking on the cabin in the interactive image. The system uses SmartDecks™ technology to update the representation of the available and non-available cabins on the interactive image. This application has a limited degree of interactivity and fails to provide much of the information customers require when considering which cabin to book. In addition, this application relies on the manual intervention of an administrator to detect cabin positions. The application makes use of a bitmap of the decks and is only able to process cabins. All other symbols and shapes are not able to be processed. As a result, this reservation system only has information about cabin availability, but no information that may be relevant to other aspects, such as handicapped access, connecting cabins, distance to elevators, etc. From the end user point of view, the experience is poor, with displays of static images as opposed to interactive representations. In addition, no search facility is provided, with the exception of an availability search. Also, from a processing point of view, the manual entry requirement is a limitation.

In general, the prior art relates to displaying information relating to areas or the whole of the ship.
There are certain prior art references that suggest further information can be accessed by using a mouse or other pointer to view further details of a particular location. The ability of an end user to engage with the prior art systems, ascertain availability and then go on to book a place is not possible. The prior art merely offers one or more layers of information to enable an end user to see some details of a particular cabin or other part of the ship.

Any discussion of documents, devices, acts or knowledge in this specification is included to explain the context of the invention. It should not be taken as an admission that any of the material formed part of the prior art base or the common general knowledge in the relevant art on or before the priority date of the claims herein.

Summary of the Invention

The present invention provides a method, system and computer program product as set out in the accompanying claims.

According to one aspect of the present invention there is provided a method of producing an interactive plan from an image including a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the method comprising: for each feature of the plurality of features, applying, by a processor, a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, converting, by the processor, the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to identify alphanumeric characters within the feature; comparing the alphanumeric characters with a list of words or numbers each
corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; linking, by the processor, each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configuring, by the processor, each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.
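The character-matching and classification step in the method above can be sketched in a few lines. This is a minimal illustration only; all names (and the idea of a simple set lookup) are assumptions for the sketch, as the patent does not prescribe any implementation.

```python
# Sketch of the claimed classification step: OCR'd characters found inside a
# feature are compared against the known list of identifiers; on a match, the
# object model is tagged as a first-type (bookable) object and linked to the
# booking flow. All names here are illustrative, not from the patent.

KNOWN_CABIN_NUMBERS = {"A101", "A102", "B201"}  # character representations of first-type features

def classify(object_model: dict, ocr_characters: str) -> dict:
    """Associate an object model with a character representation, if any."""
    token = ocr_characters.strip().upper()
    if token in KNOWN_CABIN_NUMBERS:
        object_model["type"] = "first"          # e.g. a bookable cabin
        object_model["character_representation"] = token
        object_model["linked_to_booking_flow"] = True
    else:
        object_model["type"] = "other"          # corridor, stairwell, symbol, ...
    return object_model

model = classify({"shape": "rectangle"}, " a101 ")
# model["type"] == "first", model["character_representation"] == "A101"
```

A real system would tolerate OCR errors (e.g. fuzzy matching), but the exact match suffices to show the claimed association between a recognized number and an object model.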
Optionally, the method comprises displaying the interactive plan and responding to the end user interaction with the object models in the interactive plan to carry out the predetermined functions.

Optionally, the plurality of structures includes a first type of structure and a second type of structure, and the method further comprises: applying a first predetermined combination of one or more shape elements to determine the first type of structure and a second predetermined combination of one or more shape elements to determine the second type of structure to produce different types of object models.

Optionally, generating the vector representation for the first type of structure produces a first type of vector representation, and generating the vector representation for the second type of structure produces a second type of vector representation. The method then comprises converting each combination including the first type of vector representation to the first type of object model and each combination including the second type of vector representation to a second type of object model.

Optionally, the object models comprise symbols on the image, and the method comprises: applying a mask representative of a predetermined symbol over the image; determining locations of one or more symbols on the image which correspond to the mask; and storing the determined locations of the one or more symbols in a database.

Optionally, the image includes multiple types of symbols, and applying the mask, determining the locations, and storing the determined locations are performed for each type of symbol.

Optionally, the method further comprises: recovering the determined locations from the database; and creating an associated object model to apply to the interactive plan.
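The mask step above amounts to template matching: a small binary mask representing a predetermined symbol is slid over the binarized plan image and every matching position is recorded. The following sketch uses an exact-match criterion and plain nested lists; both are simplifying assumptions (a production system would work on real raster data and tolerate noise).

```python
# Illustrative sketch of the mask step: slide a small binary mask (e.g. an
# elevator symbol) over a binarized plan image and record every position
# where the underlying pixels match the mask exactly.

def find_symbol_locations(image, mask):
    """Return (row, col) of every exact occurrence of `mask` in `image`."""
    h, w = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    hits = []
    for r in range(h - mh + 1):
        for c in range(w - mw + 1):
            if all(image[r + i][c + j] == mask[i][j]
                   for i in range(mh) for j in range(mw)):
                hits.append((r, c))
    return hits

plan = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
elevator_mask = [[1, 1],
                 [1, 1]]
locations = find_symbol_locations(plan, elevator_mask)   # [(1, 1)]
```

Running the search once per symbol type, and storing the resulting locations keyed by type, mirrors the optional per-symbol repetition described in the text.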
Optionally, the object models relate to objects in the image, and the method further comprises: defining the objects by one or more topographical rules; locating the objects that match the one or more topographical rules; and storing the location of each object located.

Optionally, each object corresponds to one of the structures.
Optionally, each object model includes details of a shape of the object, and the method further comprises determining the shapes of the objects by tokenizing the image into predetermined shape elements or color elements.

Optionally, the image comprises a plurality of pixels, and tokenizing the image comprises: moving from one pixel to another pixel searching for homogenous pixels which appear to relate to one of the features in the image; analyzing each pixel to determine whether any neighboring pixels form, in combination with the pixel being analyzed, a combination of pixels that comprise a portion of a predetermined shape element or color element; and building a database of shape elements or color elements and locations thereof for use in the interactive plan.

Optionally, each object comprises one of a cabin, a cabin with an extra bed available, a cabin with handicapped access, a cabin with an additional foldaway berth, a corridor, a stairwell, a stateroom, a connecting stateroom, a family stateroom, a stateroom with a sofa bed, a women's restroom, and a men's restroom.

Optionally, the method further comprises removing the character representations that fail to match any of the words or numbers in the list.

Optionally, the method further comprises classifying some or all of the object models to enable identification of a position of a pixel.

Optionally, the shape elements include lines and corners, and generating the vector representation of each of the structures comprises vectorizing the lines and corners to form a set of vectors each indicating a direction and a size of movement from one point in the image to a next point in the image.

Optionally, the method further comprises applying rules to control processing of the image to form the interactive plan.

Optionally, the method further comprises drawing a layout of the interactive plan and adding the object models to form a representation of the image as a display of the interactive plan.
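The pixel-by-pixel tokenizing described above can be sketched as a neighborhood scan: each set pixel is classified by which of its neighbors are also set. The three-way line/corner classification below is an assumption chosen for brevity; it only illustrates the idea of analyzing a pixel together with its neighbors and recording shape elements with their locations.

```python
# Rough sketch of the tokenizing step: walk the pixel grid and, for each set
# pixel, look at its right and down neighbours to decide whether it forms part
# of a horizontal line, a vertical line, or a corner element, building up a
# table of shape elements and their locations.

def tokenize(image):
    """Classify each set pixel by its right/down neighbours."""
    h, w = len(image), len(image[0])
    elements = []
    for r in range(h):
        for c in range(w):
            if not image[r][c]:
                continue
            right = c + 1 < w and image[r][c + 1]
            down = r + 1 < h and image[r + 1][c]
            if right and down:
                kind = "corner"
            elif right:
                kind = "h_line"
            elif down:
                kind = "v_line"
            else:
                kind = "point"
            elements.append((kind, r, c))
    return elements

tokenize([[1, 1], [1, 0]])   # [("corner", 0, 0), ("point", 0, 1), ("point", 1, 0)]
```

The resulting list of (element, row, column) tuples plays the role of the database of shape elements and locations referred to in the text.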
Optionally, the method further comprises detecting an end user interaction with the display of the interactive plan; and carrying out an action based on the interaction.

Optionally, linking the object models of the first type in the interactive plan to the booking flow based on the associated character representation enables the end user to select an object for booking, and complete the booking if the selected object is available.

Optionally, the method comprises downloading a navigator application into a mobile device, the navigator application configured to cause the mobile device to, in response to receiving a request for directions to a requested location, provide directions to the requested location using the interactive plan.

Optionally, a different complex geometry and optical recognition process is applied to each feature of the plurality of features based on the type of the feature to produce a different type of functional data.

According to a second aspect of the invention, there is provided a system for producing an interactive plan from an image comprising a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the system comprising: a processor; and a memory coupled to the processor, the memory including instructions configured to, when executed by the processor, cause the system to: for each feature of the plurality of features, apply a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, convert the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to
identify alphanumeric characters within the feature; comparing the alphanumeric characters with a list of words or numbers each corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; link each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configure each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.

Optionally, the instructions are further configured to cause the system to display the interactive plan and respond to the end user interaction with the object models in the interactive plan to carry out the predetermined functions.

Optionally, the plurality of structures includes a first type of structure and a second type of structure, and the instructions are further configured to cause the system to: apply a first predetermined combination of one or more shape elements to determine the first type of structure and a second predetermined combination of one or more shape elements to determine the second type of structure to produce different types of object models.
Optionally, generating the vector representation for the first type of structure produces a first type of vector representation, and generating the vector representation for the second type of structure produces a second type of vector representation, and the instructions are further configured to cause the system to: convert each combination including the first type of vector representation to the first type of object model and each combination including the second type of vector representation to a second type of object model.

Optionally, the system further comprises: a database, wherein the object models comprise symbols on the image, and the instructions are further configured to cause the system to: apply a mask representative of a predetermined symbol over the image; determine locations of one or more symbols on the image which correspond to the mask; and store the determined locations of the one or more symbols in the database.

Optionally, the image includes multiple types of symbols, and the instructions are further configured to cause the system to apply the mask, determine the locations, and store the determined locations for each type of symbol.

Optionally, the instructions are further configured to cause the system to recover the determined locations from the database and create an associated object model which is applied to the interactive plan.

Optionally, the object models relate to objects in the image, and the instructions are further configured to cause the system to: define the objects by one or more topographical rules; locate the objects that match the one or more topographical rules; and store the location of each object located.

Optionally, each object corresponds to one of the structures.
Optionally, each object model includes details of a shape of the object, and the instructions are further configured to cause the system to: determine the shapes of the objects by tokenizing the image into predetermined shape elements or color elements.

Optionally, each object comprises one of a cabin, a cabin with an extra bed available, a cabin with handicapped access, a cabin with an additional foldaway berth, a corridor, a stairwell, a stateroom, a connecting stateroom, a family stateroom, a stateroom with a sofa bed, a women's restroom, and a men's restroom.

Optionally, the instructions are further configured to cause the system to remove the character representations having alphanumeric characters that fail to match any of the words or numbers in the list.

Optionally, the shape elements include lines and corners, and the system generates the vector representation of each of the structures by vectorizing the lines and corners to form a set of vectors each indicating a direction and a size of movement from one point in the image to a next point in the image.

Optionally, the instructions are further configured to cause the system to apply rules to control processing of the image to form the interactive plan.

Optionally, the instructions are further configured to cause the system to draw a layout of the interactive plan and add the object models to form a representation of the image as a display of the interactive plan.

Optionally, the instructions are further configured to cause the system to detect an end user interaction with the display of the interactive plan and carry out an action based on the interaction.
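The vectorization of lines and corners described above, producing vectors that each indicate a direction and a size of movement from one point to the next, can be sketched as a run-length merge of unit steps along a traced outline (similar in spirit to a chain code). The representation chosen here is an assumption for illustration.

```python
# Sketch of the vectorization step: a traced outline (a sequence of pixel
# coordinates along lines and corners) is collapsed into a set of vectors,
# each giving a direction and a size of movement to the next point.

def vectorize(points):
    """Merge consecutive collinear unit steps into (direction, length) vectors."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (x1 - x0, y1 - y0)                      # unit direction of this move
        if vectors and vectors[-1][0] == step:
            vectors[-1] = (step, vectors[-1][1] + 1)   # extend the current run
        else:
            vectors.append((step, 1))                  # start a new run (a corner)
    return vectors

outline = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
vectorize(outline)   # [((1, 0), 2), ((0, 1), 2)]
```

Each change of direction starts a new vector, so corners fall out of the representation naturally: the example outline, an L-shape, compresses to two vectors.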
Optionally, linking the object models of the first type in the interactive plan to the booking flow based on the associated character representation enables the end user to select an object for booking, and complete the booking if the selected object is available.

Optionally, the system further comprises a mobile device including an application configured to, in response to receiving a request for directions to a requested location, cause the mobile device to use the interactive plan to provide directions to the requested location.

Optionally, the instructions are further configured to cause the system to apply a different complex geometry and optical recognition process to each feature of the plurality of features based on the type of the feature to produce a different type of functional data.

According to a third aspect of the invention there is provided a computer program product for producing an interactive plan from an image comprising a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the computer program product comprising: a non-transitory computer readable storage medium; and instructions stored on the non-transitory computer readable storage medium, the instructions being configured to, when executed by a processor, cause the processor to: for each feature of the plurality of features, apply a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, convert the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to identify alphanumeric characters within the feature;
comparing the alphanumeric characters with a list of words or numbers each corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; link each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configure each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.
The present invention offers a number of advantages. The present invention enables the identification of any kind of functional objects (cabins, elevators, etc.) that can be found on an interactive plan. The search results are shown graphically, which makes them more user friendly. There is an opportunity to use new search criteria such as proximity to points of interest, cabin size, etc. The invention further provides a ship navigator, cabin finder and/or 3D plan which may be downloaded onto a personal digital assistant (PDA) to obtain directions to a requested location to enable navigation around the ship. The invention provides automatic data acquisition from any type of plan and is not limited to the environment of ships and cruisers.

Other advantages will be evident from the following description.

This invention offers a real advantage to any cruise application which uses plans for booking purposes. The invention provides the end user with a graphical interface which is easy to manipulate and provides the administrator of a reservation system with a tool to automatically generate this graphical interface. Also, the invention makes it possible to have new ways of driving the flow of searching, identifying and booking a cabin.

An advantage of this solution is that an image can be built such that all elements can be interactive, as opposed to the static image of the past which required substantial processing to offer any form of dynamic functionality.

"Comprises"/"comprising" and grammatical variations thereof, when used in this specification, are to be taken to specify the presence of stated features, integers, steps or components or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Brief Description of the Drawings

Reference will now be made, by way of example, to the accompanying drawings, in which:

Figure 1 is a block diagram of a data acquisition process, in accordance with an embodiment of the invention;

Figure 2 is a simplified diagram showing pixel positions, in accordance with an embodiment of the invention;

Figure 3 is a schematic diagram showing the various steps of generating an interactive deck plan, describing the data acquisition process in more detail, in accordance with an embodiment of the invention;

Figure 4 is a block diagram of the image building process and system, in accordance with an embodiment of the invention;

Figures 5 and 6 are examples of interactive deck plans, in accordance with an embodiment of the invention; and

Figure 7 is a diagram showing an alternative interactive plan for a hotel village, in accordance with an embodiment of the invention.

Detailed Description of the Invention

The present invention relates to a method and system for automatically generating an interactive deck plan and using it to improve an end user's experience in a booking process. The system generates an interactive deck plan in combination with a reservation system. The method creates functional data for building an interactive deck plan in a usable object model. The invention further comprises a process for acquiring data to build the interactive deck plan and a process for displaying the acquired data.

In the present description, the word 'plan' relates to representations of buildings, ships, aircraft and the like, as well as to representations of geographical sites.

The end user can choose to book a specific cabin by selecting an image of a specific cabin on the interactive deck plan. The end user can then finalize the booking process through the reservation system by reference to the cabin number of the selected cabin.
The cabin number provides a link to associate the selected image of the cabin on the interactive deck plan and the corresponding cabin reference in the reservation system.

The representation of a cabin on the interactive deck plan, in accordance with the present invention, provides additional descriptive features or criteria relating to the cabin. The additional descriptive criteria take into account the environment of the cabin relative to the deck, for example, the external environment of the cabin. The existing reservation systems cannot technically afford to process such criteria for the booking process, due to the amount of processing required. In the present invention, the end user has more available information relating to a cabin before choosing the cabin which best matches the required preferences. Having found out all the information about the cabin in question, booking the cabin can be achieved instantly and directly with the booking system. If the cabin is not available, the graphical interface will prevent the user from selecting it.

The present invention relates to a cruise reservation and booking flow, but can be applied to any application that might use a plan of resources in a reservation process or system. The object of the cruise reservation system is to sell all cabins on a particular cruise. Currently, a travel agent or an end user who uses a cruise booking application has no access to any kind of visual information concerning the ship that can be manipulated and is interactive.

The interactive ship plan of the present invention is effectively part of the booking flow, at the cabin selection step. The information is stored as functional data (e.g. in XML format) and is linked with information from the booking process, allowing interactions between the application and the interactive deck plan in a bi-directional manner.
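The cabin-number link between the interactive plan and the reservation system can be sketched as follows. The in-memory dictionary stands in for a real reservation system, and the function names are invented for the sketch; only the behavior (book if available, block selection otherwise) comes from the text.

```python
# Sketch of the bi-directional link described above: the cabin number carried
# by an object model keys into the reservation system, so a click on the plan
# can check availability, and an unavailable cabin is made unselectable.

availability = {"A101": True, "A102": False}   # hypothetical reservation data

def on_cabin_selected(cabin_number: str) -> str:
    """Handle a click on a cabin in the interactive deck plan."""
    if availability.get(cabin_number, False):
        availability[cabin_number] = False     # cabin is now booked
        return f"booked {cabin_number}"
    return f"{cabin_number} unavailable"       # the interface blocks selection
```

Because the same cabin number identifies the object model on the plan and the record in the reservation system, a booking made through the plan immediately updates what the plan displays, which is the bi-directional interaction the text describes.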
Adding an interactive plan directly to the booking flow significantly improves the selection of the cabin. This invention enables an administrator to model a cruise ship, or any object that can be modeled by a plan, in the framework of a booking process. The framework may be extended to the modeling of any bookable resource that may include a plan. Other possibilities in the process flow may also be proposed, such as the ability to start the reservation process by a cabin selection.

The invention includes two main parts.

A first part is an automatic method of computing functional data which is used to build an interactive plan. In one embodiment of the invention, this is complex geometry and optical character recognition (CGOCR) of existing deck plans.

A second part is an automatic method to display the data derived from the first part as an enhanced interactive interface.

These two parts may be applied to the business of cruises and other travel applications, whilst the CGOCR may be applied to provide data as an interactive display to feed any type of reservation system content, or indeed any application domain where an interactive plan might be useful.

The first part of the invention will now be described in greater detail below. The present invention takes a static plan image and converts this automatically into functional data that represents the existing static plan, which can then be presented to the end user as an interactive interface. The invention then uses a data source to build an interactive deck plan image. The original image data is converted into usable object models representing all elements and characteristics of the optical deck plan. There are a number of categories of elements or characteristics, each of which is handled in a different manner, as will be described below.

Referring initially to Figure 1, a complex geometry and optical character recognition (CGOCR) process 100 is shown.
The following description will give a brief resume of the various elements of the CGOCR, which will then be described in further detail below. The CGOCR shown generally at 100 includes a source of images 102, such as an original optical deck plan or plans of a ship. The CGOCR also includes a processing block 104, which includes a tokenizer 106, a noise reduction filter 108, an adaptive flood fill module 110 and a topological analyzer 112. The noise reduction filter is applied to the image to reduce the noise level in the optical image of the deck plan. The noise level in the image derives from the scanning process of the static plan image. The scanning process may comprise scanning a print-out of the static deck plan. This scanning process adds at least two sources of noise to the input image. One source is a result of the optical chain, such as lenses, scanner glass window defects, scratches, stains, light irregularities, etc., in the original imaging devices. The other source of noise is due to the "lossy" compression of the format used; for example, a JPEG image may produce an approximation of the original bitmap resulting from the scan, which is already subject to the optical noise as indicated above.

The CGOCR further includes a classifier 114, which carries out shape matching in a shape matcher module 116, and further includes a primitive builder 118 and a vectorizer 120. The shape matcher receives symbols from a database 122 and topological rules from a database 124. The output from the classifier produces a plurality of digital scenes 126. The digital scenes are stored in a database of models 128 to be used by an application 130, as will be described in further detail below. A typical deck plan will include the layout of each deck, including the position of various articles.
For example, the deck plan may show cabins, elevators, storage spaces, details and names of the above-mentioned articles, social places such as restaurants, bars or theatres, and any other appropriate environment or article. In accordance with the present invention, the objective is to convert these scans of decks into usable functional data. In order to achieve this it is necessary to define the most basic or simple elements of a domain language appropriate to the ship (the object model). In this way, symbols and patterns can be applied to a deck image in order to generate functional data that can be exploited to rebuild the interactive image. In doing this, using the object model, the image is transformed into functional data that is used to create the graphical interface. As a result, subsequent use of a pointer, such as a pointer device click, can be processed. The richness of the object model is also used to correct the result of the low-level scanner, which may be used when identifying the least identifiable basic or simple elements.

As previously mentioned, the present invention determines a set of functional data to store, generate, display and manipulate images which can be applied to the cruise booking flow. This is achieved by use of a domain Model View Controller (MVC) pattern applied to a deck plan. The data is processed by a display engine to draw the plan in a viewer, where the view is interactive. This will be described in greater detail below. The solution of the present invention requires considerably less overhead than the use of several images in the prior art, and the functional data is easily maintainable, for example in XML format. This is particularly the case where there are changes in the ship plan. The fact that the deck plan for the ship is stored as a set of functional data means that it can be manipulated by a computer program, thereby making the functional data more flexible to use.
In addition, due to the nature of the functional data, more information associated with the deck plan can be displayed when a scaling factor changes, in other words zooming in to display more detail. An advantage of this solution is that a plan can be built where all elements can be interactive, as opposed to the static image of the past, which required substantial processing to offer any form of dynamic function. The present invention makes use of JavaScript for the logical part, HTML5 with Canvas objects for the display, and XML to store functional data. It will be appreciated, however, that any other relevant standards or formats may be used. The sequence of identifying the functional data may occur in parallel, with multiple processes occurring at the same time, or in series, where the processes may be carried out in any particular order. A first part of identifying the functional data is to determine the basic elements, for example symbols, patterns, features or elements which appear on a deck plan. This is needed to generate an exploitable data representation of some or all of the elements of the deck plan and thereby to produce the interactive plan.
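The overall CGOCR flow described with reference to figure 1 can be sketched as a chain of processing stages. This is a structural sketch only: the function names follow the description (noise reduction, tokenizer, classifier), but every body here is an illustrative stub, not the actual implementation.

```javascript
// Illustrative sketch of the CGOCR stage chain (figure 1); all bodies are stubs.
function noiseReduction(image) {
  // A real implementation would filter scanning noise here; pass-through stub.
  return image;
}

function tokenize(image) {
  // A real tokenizer would extract shape elements (lines, corners, curves)
  // and candidate labels from the filtered image.
  return { shapes: [], labels: [] };
}

function classify(tokens, symbolDb, topologicalRules) {
  // A real classifier would match tokens against the symbol database and
  // topological rules to produce digital scenes.
  return { scenes: [] };
}

// The stages compose into the full pipeline: image in, digital scenes out.
function cgocr(image, symbolDb, topologicalRules) {
  return classify(tokenize(noiseReduction(image)), symbolDb, topologicalRules);
}
```

The value of the sketch is the ordering: filtering precedes tokenizing, which precedes classification, matching the description of blocks 104 to 126.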
In the case of a deck plan, a number of specific processes will be used in order to generate an exploitable set of functional data to build interactive deck plans. The processes will identify entities to enable static plan images to be modeled. At least the following elements will be analyzed and considered in the construction of functional data. Image processing functions will identify lines, points, curves, corners, etc., as basic elements required for drawing the interactive deck plan. Certain topological rules will be assumed whilst identifying functional data; these may include the following:
- a cabin will always be a closed shape;
- cabins will generally be surrounded by the vessel superstructure (i.e. there can be no "floating cabins" found outside the perimeter of the vessel);
- a cabin will include cabin labels which generally follow a regular sequence;
- low-level iconic basic elements tend to follow a given distribution; for example, elevators may always be found at the end of corridors, escape stairs may always be in a similar position, life jackets are always in the same location on each deck, etc. The fact that certain elements follow basic distribution rules can be used to recognize simple elements within more complex parts of the deck plan.

The first set of features that will be analyzed is symbols on the deck plan. The deck plan includes a plurality of specific symbols that are generally standardized across the cruise industry, although different cruise lines may have variations on the standards, as will be seen below. The symbols may be located at multiple points on the deck plan. A symbol is a specific figure or glyph, and a definition is required for each specific symbol so that the processor is able to identify and detect them.
The following two tables show a list of specific symbols for two different cruise lines, to illustrate the nature and associated description of each symbol. It will be appreciated that many other symbols and associated descriptions may be used beyond those illustrated below.

Example 1 (symbol glyphs omitted; descriptions only):
- Third or Fourth Bed Available
- Third Bed Available
- Handicapped Access
- Connecting Staterooms
- Stateroom with sofa bed
- Women's Restroom
- Men's Restroom
- Stateroom with sofa bed and third Pullman bed available

Example 2 (symbol glyphs omitted; descriptions only):
- Third or Fourth Bed Available
- Third Bed Available
- 2 additional upper foldaway berths
- Handicapped Access
- Sofa bed
- Connecting Staterooms
- Sofa bed sleeps two

Each of the basic symbols can be referred to as a glyph or pictogram. In order to identify and detect the symbols on the original deck plan, the image of the deck plan is processed as follows. A mask exists for each of the known symbols for the specific cruise liner. For each symbol, the mask is applied to the plan and a raster scan is carried out over the surface of the plan in order to identify the position of each occurrence of the symbol in question. The coordinates of each occurrence of a particular symbol are identified and stored. This process is repeated for each of the symbols until a detailed representation of the location of all the relevant symbols has been obtained and the coordinates stored.

Certain symbols may have associated rules which prevent them from being associated with certain areas of the deck plan. The rules are provided in an appropriate database. For example, if the symbol for a sofa bed is located in a corridor area, the system will recognize that this is not possible. As a result, the symbol for the sofa bed will be associated with the closest recognized cabin, as a rule for sofa beds insists that it must be located within a cabin.
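The mask-and-raster-scan step just described can be sketched as a simple template match. This is a minimal illustration assuming binary (0/1) pixel grids; a real symbol mask would be a grayscale glyph matched with a tolerance rather than exact equality.

```javascript
// Sketch of the raster scan: slide the symbol mask over the plan and record
// the top-left coordinates of every exact match. Binary grids are an
// assumption made for illustration.
function findSymbol(plan, mask) {
  const hits = [];
  for (let y = 0; y + mask.length <= plan.length; y++) {
    for (let x = 0; x + mask[0].length <= plan[0].length; x++) {
      let match = true;
      for (let my = 0; my < mask.length && match; my++) {
        for (let mx = 0; mx < mask[0].length && match; mx++) {
          if (plan[y + my][x + mx] !== mask[my][mx]) match = false;
        }
      }
      if (match) hits.push([x, y]); // coordinates of each occurrence, stored
    }
  }
  return hits;
}
```

In the described system this scan is repeated once per known symbol, and the resulting coordinate lists become part of the functional data.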
The rules augment the use of the mask and raster scan in the identification of symbols and thus ensure that the functional data associated with a given symbol is not misassociated with an area that is unacceptable in accordance with the rules. The results of the scans include the coordinates of each symbol, stored in an appropriate database with an associated description. The stored symbol results have become functional data that can be accessed when building the interactive deck plan. The symbols used can come from any suitable location or source. Many other types of symbol can be envisaged. The symbols for a particular ship are generally identified before the functional data is constructed.

Symbols are relatively simple entities to detect and identify. The deck plan includes considerably more complex entities, and different processes are required to identify these more complex entities. To identify the more complex entities, processes using patterns and topographical rules are used. A pattern is composed of a set of topographical rules. For example, a cabin is defined by three main topographical rules, namely: the cabin is a closed space, the cabin is surrounded by the vessel superstructure, and the cabin labels follow a regular sequence. If a number of entities are identified as having the same pattern, this also allows identification of entities of a similar or same family. An example of a family of similar entities is cabins. On a deck plan there may be several entities matching the basic pattern of a cabin.
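Two of the three cabin rules just named can be sketched as a candidate test. The shape representation (an ordered list of [x, y] points) and the bounding-box stand-in for the superstructure test are assumptions for illustration; the label-sequence rule is handled separately by the classifier.

```javascript
// A closed shape's outline ends where it starts.
function isClosed(points) {
  const first = points[0], last = points[points.length - 1];
  return first[0] === last[0] && first[1] === last[1];
}

// Simplistic bounding-box containment standing in for a real point-in-polygon
// test against the vessel superstructure (no "floating cabins").
function insideHull(points, hull) {
  return points.every(([x, y]) =>
    x >= hull.minX && x <= hull.maxX && y >= hull.minY && y <= hull.maxY);
}

// A candidate cabin must satisfy both topographical rules.
function isCandidateCabin(points, hull) {
  return isClosed(points) && insideHull(points, hull);
}
```

An entity passing this test would still need a matching cabin label before the classifier accepts it as a cabin.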
As such, so that cabins can be isolated from other similar structures, a specific object structure corresponding to a cabin is created, as shown below:

aCabinInstance {
    name: 1534,
    length: 40,
    width: 16,
    position: [32,212],
    shape: "rect",
    cat: "D1",
    service: [
        {name: "ConnectingStaterooms"},
        {name: "StateroomWithSofaBed"}]
}

The above object structure represents a cabin named 1534 and includes details of the shape, size and other information in a format that can be readily accessed in order to be introduced into the interactive deck plan. A similar object structure will exist for every other cabin on each deck, and again will be available to be introduced into the interactive deck plan.

The attributes relating to the shape and size of a cabin are determined within the tokenizer. The tokenizer looks for shapes, lines, corners and any other significant shape-related information, called shape elements. Cabins are generally expected to be rectilinear, although there are some cabins which take different shapes. However, for the purpose of the next part of the description it is assumed that the cabin is rectilinear. The image of the deck plan is analyzed by the tokenizer. The tokenizer moves from pixel to pixel and finds homogeneous pixels that appear to relate to the features in the deck plan. Each pixel is analyzed to determine whether each or any of its neighbors form, in combination, part of a line or other shape. In this manner, each pixel which represents, for example, part of a cabin will be examined to identify a shape from one pixel to the next pixel of the same or similar color. When analyzing colors, the test is not for equality but for similarity. Slight differences in color are permitted as long as they are within a given threshold.
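The colour-similarity test used by the tokenizer can be sketched as follows. The Euclidean RGB distance metric and the threshold value are assumptions for illustration; the description only states that similarity within some threshold is accepted.

```javascript
// Sketch of the tokenizer's colour test: not equality, but similarity within
// a threshold. The metric and default threshold are illustrative assumptions.
function similarColor(a, b, threshold = 30) {
  const d = Math.sqrt(
    (a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2);
  return d <= threshold;
}
```

With such a test, neighbouring pixels of slightly different shades are still treated as belonging to the same wall or shape element.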
This will now be explained in greater detail with reference to figure 2. Figure 2 shows a number of identified pixels. The first identified is pixel 200. By scanning all the neighbors of pixel 200, two neighbors are identified. These are pixel 202, to the right of original pixel 200, and pixel 204, below original pixel 200. All other pixels surrounding pixel 200 do not appear to form part of a straight line or shape-related structure and are given no further analysis at this time. Pixels 202 and 204 are then analyzed for their own nearest neighbors, and gradually a picture is built up of a shape which could represent two walls of a cabin. At pixel 206 a T-junction is formed between the upward line of pixels 208 and a horizontal line to pixels 210. Pixel 200 is a corner pixel. Continuation of the analysis of nearest neighbors of pixels builds up a detailed structure of linear elements in the deck plan. A structure including four corners connected by lines is generally considered to represent a cabin. After a more defined shape has been determined, i.e. a square or rectangle, a space is defined between the shape borders. The defined space may be "in-filled" by the adaptive flood fill module 110 in figure 1. The whole deck plan is analyzed as described above to identify linear features, cabins and other sorts of spaces of particular shapes and sizes, and areas to fill in if possible.

Referring to figure 3, the overall generation process is described in more detail. A source image 300 of the original deck plan is obtained and filtered as previously mentioned. In addition, the cruise company provides a list of cabins 302 and the types of symbols 304 (as described above) that they use. Details of the static plan image 300 when processed by the tokenizer are shown as 306, as described above.
The tokenizer object recognizes the colors and basic shapes and attributes of the features shown in the source image. An optical character recognition process is applied to the source image in conjunction with the cabin list and symbol types to generate a data object 308, which includes the symbols and recognizes the cabin numbers of the cabins. The data object 302 (for example the list of cabins) and the OCR object 308 (for example the labels) are processed by the classifier (114 in figure 1). The classifier will attempt to match the 'OCR objects' coming from the tokenizer to the 302 list of cabins, thus removing any false positive results (i.e. spurious labels that may have been created in error). This ensures that only valid cabin labels are used.

The classifier makes use of the shape matcher (116 in figure 1) along with the databases for symbols and topographical rules. The classifier processes the tokenizer image to identify features that relate to corridors and other spaces which are not readily classified as cabins by the tokenizer. This may take into account certain topographical rules stored in database 124 of figure 1. The data object produced, shown as 310 in figure 3, indicates the corridors and other spaces that cannot be readily classified. The processing of the optical character recognition object 308 by the classifier produces a detailed representation of the symbols and cabin numbers. The resultant classifier optical character recognition data representation is shown as 312 in figure 3. Again, access to the cabin list and symbol types assists in the process of producing the data object 312. The result of the OCR recognition process may be cross-checked with the rules (i.e. cabin numbers should generally be ordered, they should follow a specific regular expression and/or make use of a database of the actual numbers).
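The classifier's cross-check of OCR'd labels can be sketched as follows. The cabin-number regular expression here is an assumption (the description only says labels follow "a specific regular expression"); the list membership test mirrors the matching against the 302 cabin list.

```javascript
// Hypothetical cabin-number pattern; the real expression is cruise-line specific.
const CABIN_NUMBER = /^\d{3,5}$/;

// Keep only OCR labels that match the assumed pattern AND appear in the
// cruise line's cabin list, discarding false positives created in error.
function validLabels(ocrLabels, cabinList) {
  const known = new Set(cabinList);
  return ocrLabels.filter(label => CABIN_NUMBER.test(label) && known.has(label));
}
```

A label such as "15E4" (a misread character) or "9999" (not in the cabin list) would be rejected, leaving only genuine cabin labels for the digital scene.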
At the end of the data acquisition phase, an intermediate functional representation of all deck plan images processed by the tokenizer, the optical character recognition process and the classifier is obtained.
The results of the process of tokenizing and classifying the image, as carried out by the tokenizer and classifier, are related to the positions of each of the identified pixels. In a further step, carried out by the vectorizer, the pixels and lines identified in the data representations 306 and 310 are converted into points and vectors from one point to the next. The resulting image is shown as 314. In other words, the shape features are represented by coordinates. The vectorizer image shows a number of points, one of which is shown as 316, where it is not clear whether the line on which the point exists is continuous or not. This is due to anomalies in the process. However, since the line appears to be continuous, the points are ignored and the line is assumed to be continuous. The vectorizer shows the boundary pixels of the shapes. The original curves may be used instead, by adding more vectorial segments in the shape descriptor, with a result which is an approximate representation of 310. If the classifier fails, the vectorizer may still present a valid boundary pixel, which may be used to close the opened vectorial shape. This step is called a topological closure.

In order to produce the final functional data of the deck plan, the vectorized representation 314 and the classified optical character recognition representation 312 are combined to produce functional data 318. A more detailed representation of the functional data 318 will be shown and described with reference to figures 5 and 6 in a later part of the description. The digital scene 318 shows the cabins and their numbers from the optical character recognition representation and removes all non-cabin-related information, except for information that is of particular interest to a potential passenger, such as the position of the elevators and the like.
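The core of the vectorizing step, collapsing a run of boundary pixels into a vector between two points, can be sketched as follows. The input format (ordered lists of [x, y] pixel coordinates per wall run) is an assumption; a real vectorizer would also split runs at corners and handle curves with extra segments.

```javascript
// A straight run of pixels becomes a single segment from first to last point.
function vectorizeRun(pixels) {
  return { from: pixels[0], to: pixels[pixels.length - 1] };
}

// Apply the collapse to every detected run to build the vector representation.
function vectorize(runs) {
  return runs.map(vectorizeRun);
}
```

Four pixels along a wall, for example, reduce to one segment, which is why the vectorized form is far more compact than the pixel data it replaces.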
The following example shows some of the information about the format and type of functional data relating to the ship, in this case as an XML file structure:

<ship name="">
  <categories>
    <service name="" image="" text=""/>
    <category code="" description="" color="" cabinPhotograph="" cabinPlan=""/>
    ...
  </categories>
  <decks>
    <deck name="" layout="">
      <cabins>
        <cabin name="" length="" width="" position="" shape="" cat="">
          <service name=""/>
        </cabin>
      </cabins>
    </deck>
  </decks>
</ship>

The legend is represented as follows:

<service name="ThirdBedAvailable" image="cross.png" text="Third Bed Available"/>

The legend is used by the cabins when retrieving information for a service, which is represented by a symbol on the plan. These are identified in the services listed and may include: additional beds; restrooms; handicapped access; and connecting rooms. Connecting rooms are represented by a double arrow, and the double arrow is an attribute of both cabins. Based on the number and/or the position of two cabins, they can seem to be connected. This is not always the case. However, when using the present invention there is no chance of confusion, as the attributes of the two cabins will clearly identify the connecting cabin by means of the double arrow, even if the two cabins do not have consecutive numbers or seem to be apart.
The categories are defined as follows:

<category code="D1" description="" color="#0000CC" cabinPhotograph="cabinPhotographD1.png" cabinPlan="cabinPlanD1.png"/>

The category information is retrieved by the display engine when drawing the cabins. Graphically, each cabin category has a specific color, but it also has other attributes, for example a private veranda, special services, etc. The decks are represented as follows:

<deck name="DECK FIFTEEN" layout="DECK FIFTEEN.png"/>

The layout is used as the background image for the deck plan. The layout of the deck plan can be thought of as lines, and thus it too can be stored as data in the set of functional data, if necessary. In the example where there are two types of cabin, a rectangle-shaped cabin and a polygonal cabin are represented differently, as follows:

<cabin name="1534" length="40" width="16" position="[32,212]" shape="rect" cat="D1">
  <service name="doubleArrowBottom"/>
  <service name="triangle"/>
</cabin>

The rectangular cabins have five attributes: the name, the shape (rectangular), the length and width (in pixels), and the position of the top left corner (in pixels, from the top left corner of the deck plan). Any special services offered by the cabin are listed as child attributes of the cabin. A polygonal cabin has fewer attributes: its shape (polygonal), the coordinates of its points counter-clockwise (in pixels, from the top left corner of the layout) and its category, as follows:

<cabin name="11292" points="[[206,680],[254,680],[254,696],[246,696],[246,720],[206,720]]" shape="poly" cat="OS"/>

The above are examples, and it will be clear that there are many other functional representations in different situations and circumstances. In different applications there will also be different types of features that are considered, and as a result still further functional representations.
The different functional data will depend on the specific application in which this invention is used. The above functional data of the invention are represented in XML, as previously indicated. The XML is read by a JavaScript program, which translates the logical object models into visual objects, such as cabins or other images to display to the end user. The functional data can be represented by a canvas element as follows:

<canvas width="300px" height="1520px" id="deckMapImage"></canvas>

The canvas element is an HTML5 element that enables a programmer to create an individualized image. The canvas element can be used to view and modify the image with JavaScript, as described, for example, on the W3C website (http://dev.w3.org/html5/canvas-api/canvas-2d-api.html) or in the Mozilla Developer Network tutorial (https://developer.mozilla.org/en/canvastutorial). The HTML5 canvas is one possible implementation; other vectorial viewers (such as Flash, etc.) may be used instead.

The system also includes a display engine, which is also a JavaScript file that loads the XML and parses it into a Document Object Model (DOM) object. The display engine 400 then loads data from the cruise line 402 (as shown in figure 4) to identify which cabins are available. Afterwards, the display engine draws the layout of the deck, on which are drawn all the cabins of a particular deck and in which all the available cabins are highlighted. The services are then drawn, represented by their symbols, and a picture of a cabin of the given category and the plan thereof can be displayed in a frame.

The display engine 400 in figure 4 will now be described in greater detail. The display engine includes a symbol factory 404 which has access to plans 406 and display rules 408. The display engine further includes a scene composer 410.
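The drawing step performed by the display engine can be sketched as a pure function from a cabin object model to 2D-canvas commands. The category colour map and the greyed-out colour for unavailable cabins are assumptions; in a browser the same calls would be issued against canvas.getContext('2d').

```javascript
// Sketch: translate one cabin object model into canvas draw commands.
// Field names follow the cabin object structure shown earlier; the colours
// and highlight rule are illustrative assumptions.
function drawCommands(cabin, categoryColors, available) {
  const color = categoryColors[cabin.cat] || '#cccccc';
  const cmds = [['fillStyle', available ? color : '#eeeeee']];
  if (cabin.shape === 'rect') {
    cmds.push(['fillRect', cabin.position[0], cabin.position[1],
               cabin.length, cabin.width]);
  }
  return cmds;
}
```

Keeping the drawing logic as data (a command list) rather than direct context calls makes it testable outside the browser; the viewer then replays the commands on the real canvas context.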
The display engine receives functional data from a functional data module 412; this functional data has been derived from the CGOCR processing. The symbol factory and the scene composer are linked in terms of their function, and object models corresponding to symbols are generated. The display engine is also in communication with a cruise booking flow 414, which provides information relating to availability 416 and potentially additional promotional offers 418. The display engine combines the necessary elements as described above to present to the end user on a viewer 420.

The result is one single interactive image representing a deck, with the available cabins being highlighted in a specific way. The interactive image representing the deck can then be used by an end user to determine information relating to the deck, including availability of cabins. When the mouse or a pointer of the end user points at a canvas element in the interactive image, an algorithm compares the position of the mouse to the position of the cabins on the interactive image of the deck plan. If the mouse is pointing at a cabin, the cabin is highlighted. Information relating to this cabin, such as its category, is displayed, and the availability of the cabin is indicated in a predetermined manner (for example, the cabin may show up in a specific color, indicating that it may be booked). The end user can then select the cabin by clicking the mouse. This results in the cabin being booked by the end user. Further operations can then be carried out, such as taking passenger details, payment, etc.
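The pointer comparison just described can be sketched as a rectangle hit test over the cabin object models. The field names follow the cabin object structure shown earlier; polygonal cabins would need a point-in-polygon test instead.

```javascript
// Sketch of the mouse hit test: return the first rectangular cabin whose
// bounds contain the pointer position, or null if the pointer is elsewhere.
function cabinAt(cabins, x, y) {
  return cabins.find(c =>
    x >= c.position[0] && x <= c.position[0] + c.length &&
    y >= c.position[1] && y <= c.position[1] + c.width) || null;
}
```

The viewer would call such a test on every mouse-move event, highlighting the returned cabin and showing its category and availability.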
Figures 5 and 6 show a representative example of an interactive image of a part of the deck plan, as generated by the present invention. In figure 5 a pointer is pointing at an available cabin, and further action in respect of the pointer (i.e. clicking the mouse in the vicinity of the cabin) will result in further information being available or a booking being made. In figure 6 the mouse is pointing to an unavailable cabin, which in this instance is shown by the hand 600. The end user then moves on to attempt to select a different cabin which might be available. The end user does not waste time identifying features of the first selected cabin, as the end user can immediately see the cabin is not available. Based on the system and method described above, an interactive deck plan can be generated in a matter of minutes, and the whole ship can be processed in less than an hour.

Traditionally, a booking flow is carried out by first selecting a geographical area, a date and one or several cruise liners. A ship may then be selected, along with a fare and a cabin category. This may generate a list of a limited number of cabins to choose from. Using the present invention, once the ship is selected, the end user may navigate the deck plans and select an available cabin based on various criteria before entering any further information. The various criteria used to select the cabin may include: the position in the ship; cabin category; cabin layout; cabin size; fare; the availability of services and equipment, such as handicapped access, folding beds, connecting rooms, etc.; the proximity of restaurants, theatres, etc.; group booking requirements; and so on. In other words, an end user can go straight to the interactive deck plan, select a cabin and then proceed with further aspects of the booking, such as end user identification, payment, etc.
In addition, as the cabin is an object, the end user can request that the search of the deck plan be limited to cabins having specific services, such as handicapped access, etc.
As previously mentioned, an interactive image of a plan can be generated for other applications, such as restaurants, hotels, trains and even planes. Figure 7 shows an example of another plan, of a hotel village having a number of chalets located at different locations in a specific area. An original plan 700 of a hotel in a national park is shown alongside the elements 702 and 704, which relate to interactive images created by the process according to the present invention. In a similar way to that described above, an end user can use the interactive image to select cabins or view other parts of the hotel.

The interactive deck plan offers a huge advantage to both the cruise liners and passengers when booking vacations on a ship. Once an interactive deck plan has been created, it is possible to develop applications based on it. In addition, if the deck plans change, new interactive deck plans can be built easily and quickly. The interactive deck plan could be made available on a number of different resources, such as a screen on a computer, a smart phone or other personal electronic interfaces. Within the domain of mobile devices such as smart phones and tablets, applications could be included which allow the end user to interact with the interactive deck plan when mobile.

In addition, the end user may use the interactive plan and application in a combined sense to obtain directions to a location as requested by the end user. Thus, the end user can find certain places within the ship whilst on their voyage. For example, the end user may ask for directions to a restaurant from a particular location. The fact that the interactive plan is built from functional data, rather than being a flat static image, means there is no end to the different types of application that can be used. For example, the end user may identify a golf range and use the interactive plan to effect booking of a golf session or lesson.
A person skilled in the art will understand that some or all of the functional entities, as well as the processes themselves, may be embodied in software, or in one or more software-enabled modules and/or devices, or in any combination thereof. The software may operate on any appropriate computer or other machine, and the modules may form an integral part of the computer or machine. The operation of the invention provides a number of transformations, such as the various processes for changing different parts of a still image into interactive objects, which can then be combined to generate a complete interactive plan that is equivalent in terms of features to the original plan. The ability for an end user to interact with the interactive plan produces still further transformations in a variety of different manners.

The invention can be extended to any kind of appropriate situation as described above. The creation of an interactive image from a previously non-interactive image may have a still broader application outside of the travel domain. The ability to convert a plan into a series of objects, in accordance with the present invention, could be used in many different applications, such as hotels, lodges, stadia, theatres, restaurants and any other situation where the geographical features of an element are important when the end users make a choice. It will be appreciated that this invention may be varied in many different ways and still remain within the intended scope of the invention.

Claims (41)

1. A method of producing an interactive plan from an image including a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the method comprising: for each feature of the plurality of features, applying, by a processor, a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, converting, by the processor, the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to identify alphanumeric characters within the feature; comparing the alphanumeric characters with a list of words or numbers each corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; linking, by the processor, each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configuring, by the processor, each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.
2. The method of claim 1 further comprising: displaying the interactive plan and responding to the end user interaction with the object models in the interactive plan to carry out the predetermined functions.
3. The method of claim 1 wherein the plurality of structures includes a first type of structure and a second type of structure, and further comprising: applying a first predetermined combination of one or more shape elements to determine the first type of structure and a second predetermined combination of one or more shape elements to determine the second type of structure to produce different types of object models.
4. The method of claim 3 wherein generating the vector representation for the first type of structure produces a first type of vector representation, and generating the vector representation for the second type of structure produces a second type of vector representation, and further comprising: converting each combination including the first type of vector representation to the first type of object model and each combination including the second type of vector representation to a second type of object model.
5. The method of claim 1 wherein the object models comprise symbols on the image, and further comprising: applying a mask representative of a predetermined symbol over the image; determining locations of one or more symbols on the image which correspond to the mask; and storing the determined locations of the one or more symbols in a database.
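The mask-application step of claim 5 amounts to sliding a template for a predetermined symbol over the image and recording every position where it matches. The following is a minimal sketch using exact matching on binary images represented as lists of lists; a production system would more likely use tolerant template matching. The function name and data layout are illustrative, not from the patent.

```python
def find_symbol_locations(image, mask):
    """Slide a binary mask over a binary image and return the (row, col)
    of every top-left position where the mask matches exactly.

    `image` and `mask` are rectangular lists of lists of 0/1 values.
    The returned locations are what claim 5 would store in the database.
    """
    H, W = len(image), len(image[0])
    h, w = len(mask), len(mask[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # Compare the mask against the image window anchored at (r, c).
            if all(image[r + i][c + j] == mask[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits
```

Per claim 6, this search would simply be repeated once per symbol type, with the hits for each type stored under that type in the database.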
6. The method of claim 5 wherein the image includes multiple types of symbols, and applying the mask, determining the locations, and storing the determined locations are performed for each type of symbol.
7. The method of claim 5 further comprising: recovering the determined locations from the database; and creating an associated object model to apply to the interactive plan.
8. The method of claim 1 wherein the object models relate to objects in the image, and further comprising: defining the objects by one or more topographical rules; locating the objects that match the one or more topographical rules; and storing the location of each object located.
9. The method of claim 8 wherein each object corresponds to one of the structures.
10. The method of claim 8 wherein each object model includes details of a shape of the object, and further comprising: determining the shapes of the objects by tokenizing the image into predetermined shape elements or color elements.
11. The method of claim 10 wherein the image comprises a plurality of pixels, and tokenizing the image comprises: moving from one pixel to another pixel searching for homogenous pixels which appear to relate to one of the features in the image; analyzing each pixel to determine whether any neighboring pixels form, in combination with the pixel being analyzed, a combination of pixels that comprise a portion of a predetermined shape element or color element; and building a database of shape elements or color elements and locations thereof for use in the interactive plan.
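The pixel-by-pixel neighbour analysis described in claim 11 can be approximated by a standard connected-component (flood-fill) pass that groups homogeneous pixels into regions. This is a hedged sketch of one way to realise that step, not the claimed implementation; the function name and the choice of 4-connectivity are assumptions.

```python
from collections import deque

def tokenize_regions(pixels):
    """Group equal-valued, 4-connected pixels into homogeneous regions.

    `pixels` is a rectangular list of lists (e.g. color values).
    Returns a list of (value, set_of_(row, col)_coordinates) — the raw
    material from which shape or color elements would be recognised and
    stored, per claim 11.
    """
    H, W = len(pixels), len(pixels[0])
    seen = [[False] * W for _ in range(H)]
    regions = []
    for r in range(H):
        for c in range(W):
            if seen[r][c]:
                continue
            value, region = pixels[r][c], set()
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                region.add((y, x))
                # Examine the four neighbours of the pixel being analyzed.
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W and not seen[ny][nx] \
                            and pixels[ny][nx] == value:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append((value, region))
    return regions
```

Each region's value and coordinate set would then be matched against the predetermined shape or color elements and written to the database of elements and locations.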
12. The method of claim 8 wherein each object comprises one of a cabin, a cabin with an extra bed available, a cabin with handicapped access, a cabin with an additional foldaway berth, a corridor, a stairwell, a stateroom, a connecting stateroom, a family stateroom, a stateroom with a sofa bed, a women's restroom, and a men's restroom.
13. The method of claim 1 further comprising: removing the character representations that fail to match any of the words or numbers in the list.
14. The method of claim 1 further comprising: classifying some or all of the object models to enable identification of a position of a pixel.
15. The method of claim 1 wherein the shape elements include lines and corners, and generating the vector representation of each of the structures comprises: vectorizing the lines and corners to form a set of vectors each indicating a direction and a size of movement from one point in the image to a next point in the image.
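The vectorization of claim 15 (turning lines and corners into a set of vectors, each with a direction and a size of movement from one point to the next) can be sketched as follows. This is an illustrative minimal version, assuming the corner points have already been extracted as an ordered list; the function name and the angle/length representation are assumptions, not the patent's exact encoding.

```python
import math

def vectorize_path(points):
    """Convert an ordered list of (x, y) corner points into movement vectors.

    Returns one (angle_in_degrees, length) pair per segment, i.e. the
    direction and size of movement from each point to the next point,
    as described in claim 15.
    """
    vectors = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        vectors.append((math.degrees(math.atan2(dy, dx)),
                        math.hypot(dx, dy)))
    return vectors
```

For example, the L-shaped path (0,0) → (3,0) → (3,4) yields two vectors: 3 units at 0° followed by 4 units at 90°, which together reconstruct the outline of the structure.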
16. The method of claim 1 further comprising: applying rules to control processing of the image to form the interactive plan.
17. The method of claim 1 further comprising: drawing a layout of the interactive plan and adding the object models to form a representation of the image as a display of the interactive plan.
18. The method of claim 17 further comprising: detecting an end user interaction with the display of the interactive plan; and carrying out an action based on the interaction.
19. The method of claim 1 wherein linking the object models of the first type in the interactive plan to the booking flow based on the associated character representation enables the end user to select an object for booking, and complete the booking if the selected object is available.
20. The method of claim 1 further comprising: downloading a navigator application into a mobile device, the navigator application configured to cause the mobile device to, in response to receiving a request for directions to a requested location, provide directions to the requested location using the interactive plan.
21. The method of claim 1 wherein a different complex geometry and optical recognition process is applied to each feature of the plurality of features based on the type of the feature to produce a different type of functional data.
22. A system for producing an interactive plan from an image comprising a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the system comprising: a processor; and a memory coupled to the processor, the memory including instructions configured to, when executed by the processor, cause the system to: for each feature of the plurality of features, apply a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, convert the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to identify alphanumeric characters within the feature; comparing the alphanumeric characters with a list of words or numbers each corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; link each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configure each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.
23. The system of claim 22 wherein the instructions are further configured to cause the system to: display the interactive plan and respond to the end user interaction with the object models in the interactive plan to carry out the predetermined functions.
24. The system of claim 22 wherein the plurality of structures includes a first type of structure and a second type of structure, and the instructions are further configured to cause the system to: apply a first predetermined combination of one or more shape elements to determine the first type of structure and a second predetermined combination of one or more shape elements to determine the second type of structure to produce different types of object models.
25. The system of claim 24 wherein generating the vector representation for the first type of structure produces a first type of vector representation, and generating the vector representation for the second type of structure produces a second type of vector representation, and the instructions are further configured to cause the system to: convert each combination including the first type of vector representation to the first type of object model and each combination including the second type of vector representation to a second type of object model.
26. The system of claim 22 further comprising: a database, wherein the object models comprise symbols on the image, and the instructions are further configured to cause the system to: apply a mask representative of a predetermined symbol over the image; determine locations of one or more symbols on the image which correspond to the mask; and store the determined locations of the one or more symbols in the database.
27. The system of claim 26 wherein the image includes multiple types of symbols, and the instructions are further configured to cause the system to apply the mask, determine the locations, and store the determined locations for each type of symbol.
28. The system of claim 26 wherein the instructions are further configured to cause the system to: recover the determined locations from the database; and create an associated object model which is applied to the interactive plan.
29. The system of claim 22 wherein the object models relate to objects in the image, and the instructions are further configured to cause the system to: define the objects by one or more topographical rules; locate the objects that match the one or more topographical rules; and store the location of each object located.
30. The system of claim 29 wherein each object corresponds to one of the structures.
31. The system of claim 29 wherein each object model includes details of a shape of the object, and the instructions are further configured to cause the system to: determine the shapes of the objects by tokenizing the image into predetermined shape elements or color elements.
32. The system of claim 29 wherein each object comprises one of a cabin, a cabin with an extra bed available, a cabin with handicapped access, a cabin with an additional foldaway berth, a corridor, a stairwell, a stateroom, a connecting stateroom, a family stateroom, a stateroom with a sofa bed, a women's restroom, and a men's restroom.
33. The system of claim 22 wherein the instructions are further configured to cause the system to remove the character representations having alphanumeric characters that fail to match any of the words or numbers in the list.
34. The system of claim 22 wherein the shape elements include lines and corners, and the system generates the vector representation of each of the structures by: vectorizing the lines and corners to form a set of vectors each indicating a direction and a size of movement from one point in the image to a next point in the image.
35. The system of claim 22 wherein the instructions are further configured to cause the system to: apply rules to control processing of the image to form the interactive plan.
36. The system of claim 22 wherein the instructions are further configured to cause the system to: draw a layout of the interactive plan and add the object models to form a representation of the image as a display of the interactive plan.
37. The system of claim 36 wherein the instructions are further configured to cause the system to: detect an end user interaction with the display of the interactive plan; and carry out an action based on the interaction.
38. The system of claim 22 wherein linking the object models of the first type in the interactive plan to the booking flow based on the associated character representation enables the end user to select an object for booking, and complete the booking if the selected object is available.
39. The system of claim 22 further comprising: a mobile device including an application configured to, in response to receiving a request for directions to a requested location, cause the mobile device to use the interactive plan to provide directions to the requested location.
40. The system of claim 22 wherein the instructions are further configured to cause the system to apply a different complex geometry and optical recognition process to each feature of the plurality of features based on the type of the feature to produce a different type of functional data.
41. A computer program product for producing an interactive plan from an image comprising a plurality of features of different types, the features of a first type each including a character representation that identifies the feature within the image, the computer program product comprising: a non-transitory computer readable storage medium; and instructions stored on the non-transitory computer readable storage medium, the instructions being configured to, when executed by a processor, cause the processor to: for each feature of the plurality of features, apply a complex geometry and optical recognition process to the feature to produce functional data, the functional data comprising a plurality of structures each including a predetermined combination of one or more shape elements and a predetermined function; for each feature of the plurality of features, convert the functional data into an object model by: generating a vector representation for each of the structures to define the object model; applying an optical character recognition process to identify alphanumeric characters within the feature; comparing the alphanumeric characters with a list of words or numbers each corresponding to one of the character representations that identifies features of the first type within the image; and in response to the alphanumeric characters matching one of the words or numbers, associating the object model with the corresponding character representation to classify the object model as being a first type of object model; link each of the object models of the first type with a booking flow based on the associated character representation to construct the interactive plan for display to an end user; and configure each of the object models of the first type to carry out the predetermined function related to the linked booking flow in response to end user interaction with the object model.
AMADEUS S.A.S. WATERMARK PATENT AND TRADE MARKS ATTORNEYS P39508AU00
AU2013252158A 2012-04-24 2013-04-17 A method and system of producing an interactive version of a plan or the like Active AU2013252158B2 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US13/454,540 US9105073B2 (en) 2012-04-24 2012-04-24 Method and system of producing an interactive version of a plan or the like
US13/454,540 2012-04-24
EP12368011.8 2012-04-24
EP12368011.8A EP2657883A1 (en) 2012-04-24 2012-04-24 A method and system of producing an interactive version of a plan or the like
PCT/EP2013/001142 WO2013159878A1 (en) 2012-04-24 2013-04-17 A method and system of producing an interactive version of a plan or the like

Publications (2)

Publication Number Publication Date
AU2013252158A1 AU2013252158A1 (en) 2014-11-13
AU2013252158B2 true AU2013252158B2 (en) 2016-01-07

Family

ID=48141901

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2013252158A Active AU2013252158B2 (en) 2012-04-24 2013-04-17 A method and system of producing an interactive version of a plan or the like

Country Status (7)

Country Link
JP (1) JP5922839B2 (en)
KR (1) KR101673453B1 (en)
CN (1) CN104246792B (en)
AU (1) AU2013252158B2 (en)
CA (1) CA2867077C (en)
IN (1) IN2014DN08055A (en)
WO (1) WO2013159878A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6384007B2 (en) * 2015-01-29 2018-09-05 三菱造船株式会社 Room selection support system, room selection support method and program
US10580207B2 (en) * 2017-11-24 2020-03-03 Frederic Bavastro Augmented reality method and system for design

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987173A (en) * 1995-03-27 1999-11-16 Nippon Steel Corporation Interactive drawing recognition processing method and apparatus thereof

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2684991B2 (en) * 1994-06-14 1997-12-03 日本電気株式会社 Incense board data creation method
US7253731B2 (en) * 2001-01-23 2007-08-07 Raymond Anthony Joao Apparatus and method for providing shipment information
US20020099576A1 (en) 2001-01-22 2002-07-25 Macdonald John A. Managing reservations
US20040054670A1 (en) * 2001-02-07 2004-03-18 Jacob Noff Dynamic object type for information management and real time graphic collaboration
KR20040083178A (en) * 2003-03-21 2004-10-01 삼성전자주식회사 Method and apparatus arranging a plural image
KR101388133B1 (en) * 2007-02-16 2014-04-23 삼성전자주식회사 Method and apparatus for creating a 3D model from 2D photograph image
US8516365B2 (en) * 2007-06-15 2013-08-20 Microsoft Corporation Dynamically laying out images and associated text using pre-defined layouts
JP5016096B2 (en) * 2010-11-02 2012-09-05 ヤフー株式会社 Seat layout creation device, system, method, and program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987173A (en) * 1995-03-27 1999-11-16 Nippon Steel Corporation Interactive drawing recognition processing method and apparatus thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DORI, D. et al., 'From engineering drawings to 3D CAD models: are we ready now?', Computer-Aided Design, 1995, Vol. 27, No. 4, pages 243-254 *
ILLERT, A., 'Automatic Digitization of Large Scale Maps', ACSM-ASPRS Annual Convention, 1991, Vol 6, Auto-Carto 10, pages 113-122 *
KOSTOPOULOS, K. et al., 'Haptic Access to Conventional 2D Maps for the Visually Impaired', Journal of Multimodal User Interfaces, 2007, Vol. 1, No. 2, pages 13-19 *
YEH, T. et al., 'Sikuli: Using GUI Screenshots for Search and Automation', UIST '09 Proceedings of the 22nd annual ACM symposium on User interface software and technology, 2009, pages 183-192 *

Also Published As

Publication number Publication date
CA2867077C (en) 2021-05-18
AU2013252158A1 (en) 2014-11-13
CN104246792A (en) 2014-12-24
WO2013159878A1 (en) 2013-10-31
CN104246792B (en) 2017-10-03
IN2014DN08055A (en) 2015-05-01
CA2867077A1 (en) 2013-10-31
JP5922839B2 (en) 2016-05-24
KR101673453B1 (en) 2016-11-22
JP2015519643A (en) 2015-07-09
KR20150003828A (en) 2015-01-09

Similar Documents

Publication Publication Date Title
Lehtola et al. Digital twin of a city: Review of technology serving city needs
Cecconi et al. Adaptive zooming in web cartography
CN108228183B (en) Front-end interface code generation method and device, electronic equipment and storage medium
US20100328316A1 (en) Generating a Graphic Model of a Geographic Object and Systems Thereof
JP5797419B2 (en) Map information processing apparatus, navigation apparatus, map information processing method, and program
CN116597039B (en) Image generation method and server
KR102004175B1 (en) Apparatus and method for providing three dimensional map
US11270484B2 (en) System and method for semantic segmentation of a source geometry
AU2013252158B2 (en) A method and system of producing an interactive version of a plan or the like
Brodkorb et al. Overview with details for exploring geo-located graphs on maps
CA3032201A1 (en) Geospatial mapping system
US9105073B2 (en) Method and system of producing an interactive version of a plan or the like
EP2657883A1 (en) A method and system of producing an interactive version of a plan or the like
CN115861609A (en) Segmentation labeling method of remote sensing image, electronic device and storage medium
Xiao et al. Basic level scene understanding: From labels to structure and beyond
KR20190079997A (en) Connected drawing link system and method using drawing number cognition
Ablameyko et al. A complete system for interpretation of color maps
Casner A task-analytic approach to the automated design of information graphics
Papadakis et al. From space to place and back again: towards an interface between space and place
Pedrinis et al. Reconstructing 3D building models with the 2D cadastre for semantic enhancement
Nikoohemat Indoor 3D reconstruction of buildings from point clouds
Boulos Principles and techniques of interactive Web cartography and Internet GIS
Macias et al. Driving assistance algorithm for self-driving cars based on semantic segmentation
Ablameyko et al. Interpretation of colour maps. A combination of automatic and interactive techniques
Hossain et al. Building Rich Interior Hazard Maps for Public Safety

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)