WO2002031774A1 - System and method for combining two or more digital picture objects - Google Patents


Publication number
WO2002031774A1
Authority
WO
WIPO (PCT)
Application number
PCT/FI2001/000882
Other languages
French (fr)
Other versions
WO2002031774A8 (en)
Inventor
Petri Koskela
Original Assignee
Pro Botnia Oy
Priority to FI20002249 priority Critical
Priority to FI20002249A priority patent/FI113899B/en
Application filed by Pro Botnia Oy filed Critical Pro Botnia Oy
Publication of WO2002031774A1 publication Critical patent/WO2002031774A1/en
Publication of WO2002031774A8 publication Critical patent/WO2002031774A8/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text

Abstract

The invention relates to a method and a system enabling two or more digital images (A, B) to be processed in order to recognize and separate essential details (A', B'). After being separated, the essential details (A', B') are combined into a combined image object (A'B') and the compatibility between the image objects (A'B') and the essential details (A', B') is examined. The system operates in an information network, such as the Internet, and it is used by a WWW browser. The system utilizes pattern recognition enabling the essential image objects (A', B') to be recognized by means of type images (51a) of a model group database (50).

Description

SYSTEM AND METHOD FOR COMBINING TWO OR MORE DIGITAL PICTURE OBJECTS

FIELD OF THE INVENTION

[0001] The invention relates to a system for combining two or more digital image objects into a combined image object on a correct scale in an information network. The invention further relates to a method for combining two or more digital image objects into a combined image object on a correct scale in an information network.

[0002] Such a system and method are utilized in digital image processing when image objects are combined with each other in order to form an integrated whole.

BACKGROUND OF THE INVENTION

[0003] Digital image processing systems are commonly used nowadays. In such systems, a computer is used for manipulating image objects converted into a digital form, such as photographs with different distinct details. An application of the digital image processing systems is a solution enabling different image objects to be combined into an integrated whole.

[0004] One such solution is a fitting system wherein eyewear frames are fitted on a photograph of a user's face. For instance Ray-Ban, a manufacturer of sunglasses, uses such a system on its Internet homepages. The system works in such a manner that the user of the system can send a photograph of his or her face to the system. The Ray-Ban network database comprises frames of the Ray-Ban sunglasses that the user of the system is able to fit on the facial photograph sent to the system.

[0005] The system accepts a facial image with a predetermined size in order to enable the image to be utilized in the system. The problem with this is that the system is implemented only for trying on sunglasses, and, on the other hand, the user's facial image has to be adapted to a particular scale. In the Ray-Ban service, the service administrator converts the images to the size required by the service. This, however, necessitates a lot of system maintenance work while it still does not enable fitting during a session without delay, which is a substantial drawback of the system.

[0006] Another problem with the system implemented as described is that the system only enables image objects with both predetermined subjects and scale to be combined. This means that a service provided by the system is restricted only to combining image objects meeting the above-mentioned criteria. Each service implemented in this manner thus requires software of its own, databases of its own and an image processing system of its own.

[0007] The prior art systems are closed systems arranged to be used exclusively in connection with predetermined image objects, services and products. The systems are closed systems independent of each other, and it is impossible for a user to combine image objects of such systems with each other.

[0008] No prior art system has been implemented that would enable two users to send desired image objects on an undetermined scale to a system during a network session and to further fit the image objects together on a commensurable scale.

BRIEF DESCRIPTION OF THE INVENTION

[0009] An object of the present invention is to provide an improved system for combining two or more digital image objects with each other on a correct scale such that in a resulting image, the image objects are combined into one integrated whole that can be viewed as a visual presentation e.g. on a display of a computer. This is achieved by the invention comprising a server and storing means in the server for receiving and storing digital image objects and related additional information into the system. The system is characterized in that the server comprises first logic means for determining an essential image object from a digital image entered into the system. The server comprises second logic means for separating the essential image object from the digital image entered into the system. The server comprises third logic means for combining the separated image object and the related additional information such that the image object is provided with necessary measurement information when it is to be combined with another separately selected image object also provided with known measurement information.

[0010] A further object of the invention is to provide an improved method for combining two or more digital image objects on a correct scale into one combined image object, the method comprising receiving and storing one or more digital images into the system; recognizing and confirming an essential image object from the digital image entered into the system; separating said essential image object from the digital image entered into the system; and recognizing and confirming a model group of the image object. The model group of the image object is selected using pattern recognition for recognizing the image object, or a user selects the model group from terminal equipment.

[0011] Preferred embodiments of the invention are disclosed in the dependent claims.

[0012] An object of the invention is to provide a system for combining digital image objects which is not customized for one product exclusively but which enables different image objects to be fitted together irrespective of time and place, on condition that a logical factual connection, e.g. between a face and spectacles, exists between the image objects.

[0013] The system of the invention for combining digital image objects preferably operates in a common information network, such as the World Wide Web, also known as the Internet. Preferably, as its terminal equipment, the system of the invention employs telecommunication terminals connected to an information network, such as a microcomputer connected to an information network by interface devices, e.g. a modem or the like, and equipped with an Internet browser, or corresponding terminal equipment operating in an information network.

[0014] The system of the invention comprises combining image objects already existing in image object databases at location addresses managed by an address database of the system, image objects entered into the system during use, or combinations thereof. The combining process utilizes a model group database. The model group database then comprises information on the logical factual connections of the model groups to other model groups. When an image object is entered into the system, the system examines type images of the model groups. On the basis of this examination, it can be decided to which group a particular image object belongs and to which image objects of the model groups the particular image object or its area can be attached.

[0015] Preferably, an image object can be combined with another selected image object of the system on a correct scale such that the second logic means comprise a model group database comprising a set of model groups comprising type images of image objects that belong to each group, and a selection logic for recognizing and confirming the model group of an image object. The model group database is provided with definitions for the logical factual connections of the model groups to other model groups and additional information to enable the factual connection of the image objects to be implemented on a correct scale in a combined image object. The additional information is measurement information on the parts of the image object. The logical factual connections to other model groups are implemented by providing the type images of the model group with information indicating which part of the image object can be provided with measurement information and to which model group or model groups this measurement information part has a logical factual connection. In addition, a default location of the image object having a logical factual connection is indicated in the type images of the model group.
The user can change the location of the image object by means of conventional image processing procedures. For instance, the distance between the pupils is an example of additional information relating to a facial image. On the other hand, the width of the temples is an example of additional information relating to eyewear frames. A logical factual connection, which is implemented using the measurement information 'distance between pupils and width of frames', is defined between the part of the facial image called 'pupils' of the model group and the model group called 'eyewear frames'. On the basis of the above measurement information, the system converts the image objects defined by users to the same scale and aligns them with respect to each other to the point of the image area that has been defined as the default location of this logical factual connection.
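The scale conversion described above can be illustrated with a minimal sketch. It assumes each image object carries one measured area whose real-world size is known (e.g. pupil distance in millimetres on the facial image, frame width in millimetres on the frame image); the class and field names here are illustrative assumptions, not part of the patent disclosure.

```python
from dataclasses import dataclass

@dataclass
class ImageObject:
    width_px: int       # pixel width of the measured image area
    measured_mm: float  # real-world size of that area in millimetres

    @property
    def px_per_mm(self) -> float:
        """Pixel density implied by the measurement information."""
        return self.width_px / self.measured_mm

def rescale_to_match(obj: ImageObject, reference: ImageObject) -> float:
    """Factor by which `obj` must be scaled so that both image objects
    share the reference object's pixels-per-millimetre, i.e. the same scale."""
    return reference.px_per_mm / obj.px_per_mm

# Illustrative values: pupil distance spans 120 px and is given as 60 mm;
# the frame image spans 300 px and the frame width is given as 140 mm.
face = ImageObject(width_px=120, measured_mm=60.0)     # 2.0 px/mm
frames = ImageObject(width_px=300, measured_mm=140.0)  # ~2.14 px/mm

factor = rescale_to_match(frames, face)
# Scaling the frame image by `factor` brings it to 2.0 px/mm; the two
# objects can then be aligned at the default location of the connection.
```

After rescaling, 300 px x factor = 280 px, which at 2.0 px/mm corresponds to the stated 140 mm frame width on the facial image's scale.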

[0016] Each model group comprises a specification called 'feature' to define special functions associated with a particular model group. A model group and a single image object may have zero, one, or several 'feature' specifications. 'Colour' is a common feature among the model groups, used for determining that the colours of the image objects in a particular group can be given different shades. Model-group-specifically or image-object-specifically, the colour shades can be restricted to certain colours exclusively. Another common feature is a feature called a 'sub-area'. When a model group has been provided with the feature 'sub-area', the system automatically asks the user to indicate the operating area by a pointer. For instance, a sub-area specification 'cheek' is used for adjusting the blusher. The feature determined for the cheek area is 'sub-area'. When the user, using the pointer, indicates the location of the image object provided with the feature 'sub-area', the system produces a menu displaying the names of the model groups to which the particular location has a logical factual connection. When the user selects one of the model groups shown in the menu, the system guides the user to indicate the area to which the user wishes the image object of the selected model group to be placed.

[0017] The 'feature' specification is also used for determining the default location of the image object in a model group when the image object is combined with another image object. The most common default locations are specifications called 'background', 'front' and 'parallel'. For instance, a room has a 'background' feature, whereby a room image object entered into the system is, as default, placed as a background image to enable e.g. furniture to be fitted thereon. The default specifications can be changed by means of conventional image processing procedures.

[0018] On the basis of the model group procedure, it can be decided to which group a particular image object belongs and to which model group or model groups the image object has a logical factual connection. In order to enable the image object to be attached to another image object, the measurement information on at least one image object area is required. The measurement information on the image object area may be measurement information on the outer edges of the image object, e.g. the width and height of a couch, or measurement information on a single sub-area of the image object, such as the width and height of a sofa cushion. This image object area provided with measurement information can be determined to have a logical factual connection to another model group. For instance, information provided on a facial image includes the length of the distance between the pupils, which enables frames whose width is known to the system to be fitted on a correct scale. The system is able to conclude the height of the frames on the basis of the given width measurement information. The default location of the frame image object is determined by indicating the default location in the type images. For the image area, a default location is determined in each logical factual connection. An exception is an image area provided with the feature 'sub-area' and indicated or defined by the user only after selecting the image object to be combined.

[0019] The essential image objects can be fitted only if the information structure comprises a description of a logical factual connection between the model groups of the image objects.

[0020] The system comprises an address database for maintaining the location identifiers, i.e. addresses, of the image object databases and model group databases in the information network.

[0021] The system further comprises a measurement unit database comprising measurement units and their coefficients compared to a reference unit. According to an embodiment, the additional information on the image object can be determined by feeding measurement information corresponding to a particular point in the image object into a corresponding column in a feed form and by selecting, using a browser, a measurement unit corresponding to the measurement information. According to a second embodiment, the point in the image object corresponding to the fed measurement information can be determined by indicating the starting and ending point of the particular point, e.g. the pupils of the eyes, on the screen by a pointer. According to a third embodiment, the system is able to interpret the measurement information on the basis of earlier measurement information or on the basis of the contents of the image object. For instance, the image object may comprise the necessary measurement information shown in numbers, and the system recognizes the necessary measurement information directly from the image object.
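The measurement unit database described above can be sketched as a table of coefficients relative to a reference unit. The choice of millimetres as the reference unit, and the dictionary and function names, are illustrative assumptions rather than details taken from the patent text.

```python
# Each unit carries a coefficient: how many reference units (mm) make up
# one unit of that kind. Stored measurements are kept in the reference unit.
UNIT_COEFFICIENTS_MM = {
    "mm": 1.0,
    "cm": 10.0,
    "inch": 25.4,
}

def to_reference(value: float, unit: str) -> float:
    """Convert a measurement fed by the user into the reference unit."""
    return value * UNIT_COEFFICIENTS_MM[unit]

def from_reference(value_mm: float, unit: str) -> float:
    """Convert a stored reference-unit value into the user's display unit."""
    return value_mm / UNIT_COEFFICIENTS_MM[unit]

# A user feeds 2.5 inches; the system stores 63.5 mm and can later display
# the value in whichever unit the user's settings select.
stored = to_reference(2.5, "inch")
displayed_cm = from_reference(stored, "cm")
```

This matches the behaviour described in paragraphs [0021] and [0022]: the user may feed inches while the system stores the value on the default scale of the model group and redisplays it in the user's preferred unit.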

[0022] The user can thus feed the measurement information e.g. in inches and the system stores the measurement information on a default scale determined in the model group of the image object. When the user, using his or her browser, opens the image object, he or she is shown the measurement units the user has selected in his or her personal terminal equipment settings or browser settings. If the user registers in a user database, the user can store his or her default measurement units in the user database.

[0023] Depending on the purpose for which the image object is intended, the image object is defined as a public or a private image object when being stored. A public image object is available to all users in the system. Such an image object is preferably an image object representing e.g. a product in a product range of a company, such as a store. The public image object is to be given at least one piece of measurement information, represented in the type image of the model group of the image object. For instance, a logical factual connection exists between a facial image and a frame group; the system is provided with a description of this factual connection as a relation between the distance between the pupils and the width of the frames. A private image object, in turn, is an image object to be used by a limited group of users only. Such a user group is limited by sending user keys to the users in the user group of this image object to enable the image object to be used in the system. The user keys to both image objects are then needed to enable two private image objects to be used, and the image objects are combined by combining the user keys of the image objects. The measurements of at least the image area used for combining the image object with another private image object or a public image object have to be included in the additional information disclosed. If desired by the user, the additional information on the private image object can be fed only when the private image object is being combined with another image object; alternatively, the additional information can be stored when the user stores the image object in the system.
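The user-key rule for private image objects amounts to a simple authorization check: combining two private image objects requires the user key of each. The following sketch is illustrative only; the key representation and function name are assumptions, not the key scheme disclosed in the patent.

```python
def authorize_combination(held_keys: set, key_a: str, key_b: str) -> bool:
    """Two private image objects may be combined only if the user holds
    the user key of both objects; the combination is then formed by
    presenting (combining) the two keys together."""
    return key_a in held_keys and key_b in held_keys

# A user holding both keys may combine the objects; a user holding only
# one key may not.
both = authorize_combination({"key-face", "key-frames"}, "key-face", "key-frames")
one = authorize_combination({"key-face"}, "key-face", "key-frames")
```

A public image object would bypass this check entirely, since it is available to all users of the system.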

[0024] A substantial advantage of such a system and method is that the system operates in a common information network, such as the Internet, so the system is available to all those willing to use it. The fitting then becomes a procedure performed in an information network, independent of time and place. Consequently, the system can then preferably be used by a user e.g. for trying on eyewear frames provided by an optician, independently of time and place, which means that the user is advantageously able to try on the frames e.g. at home at night, undisturbed. Secondly, the system is an open system which enables different image objects to be combined, meaning that extremely different image objects can be combined, not just spectacles and facial images, but the system enables the user e.g. to try which kind of wheel rims suits the user's car or which kind of furniture suits the user's living-room or kitchen.

[0025] The database structure of the system is scalable, which means that a single image object in the system may comprise several image objects managed by the system.

[0026] A further advantage of the system is that the system can be used by any terminal equipment connected to an information network capable of operating on the protocol used in the information network. Preferably, the system is used by a person who or a company which sends one or more digital images to the system. The system can be used for processing and manipulating image objects in the system, thereby enabling public image objects entered into the system and such image objects already existing in the system as well as private image objects protected by a user key entered into the system to be utilized. The image object can then be stored in the address system of the system or the image object can be produced for fitting for the duration of such a fitting session exclusively.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] In the following, the invention will be described in closer detail by means of the accompanying drawings, in which

[0028] Figure 1 is a general view of a system for combining two or more digital image objects into a combined image object, showing an information network and structural parts of the system connected thereto,

[0029] Figure 2A is a block diagram showing an example of operation steps of the system when two image objects are combined,

[0030] Figures 2B to 2E show an example of operation steps of the system when two image objects are combined,

[0031] Figure 3 shows a database structure of the system,

[0032] Figure 4A shows an example of type images of a model group database of the system,

[0033] Figure 4B shows an example of a type image,

[0034] Figure 4C shows an example of logical factual connections contained in the type image,

[0035] Figure 5A is a block diagram showing a method for combining two or more digital image objects into a combined image object,

[0036] Figure 5B shows both a series of pictures and a block diagram showing an example of the method for combining two or more digital image objects into a combined image object,

[0037] Figure 6 shows a step of selecting a model group of the image object of the method of Figures 5A and 5B as a process graph and a block diagram when the image object entered into the system is fitted on a public image object,

[0038] Figure 7 is a block diagram showing how a private digital image object is entered into the system,

[0039] Figure 8 is a block diagram showing how a public digital image object is entered into the system,

[0040] Figure 9 is a block diagram showing how a user's personal digital image object is combined with another private image object,

[0041] Figure 10 is a block diagram showing how public digital image objects are browsed and combined,

[0042] Figure 11 is a block diagram showing how a private image object is combined with a public image object,

[0043] Figure 12A shows a digital image,

[0044] Figure 12B shows an essential image object recognized by the system and confirmed by the user,

[0045] Figure 12C shows a logical factual connection between digital images,

[0046] Figure 12D shows measurement information describing the logical factual connection,

[0047] Figure 12E shows a combined image object,

[0048] Figure 13A shows how a default location is determined,

[0049] Figure 13B shows a location according to the default location, and

[0050] Figure 13C shows a correction made by the user to the combined image object.

DESCRIPTION OF PREFERRED EMBODIMENTS

[0051] Referring to Figures 1 to 13, an embodiment of the structure and operation of a typical system according to the invention will be described. The system comprises a server computer 10 connected to an information network 11, the server computer 10, in turn, comprising an image object database 40, a model group database 50, an address database 60, a user database 70, a measurement unit database 80 and a feature database 90. By means of interface equipment, the server computer 10 is connected to the information network 11, preferably to the Internet. It is obvious as such that the information network 11 may be any prior art information network 11. The structure of the image object database 40 can be a distributed one, in which case, according to Figure 3, there may be image object databases 40 distributed all around the information network 11 at several different points. The address database 60 connected to the server computer 10 then comprises address information on the location of the image object databases 40 in the information network 11.

[0052] According to Figure 1, user terminal equipment, such as microcomputers 2, mobile telephones 3 or the like, is also connected to the system, the terminal equipment communicating through the information network 11 with the server computer 10 connected to the information network 11 by means of telecommunication devices 1, such as a modem 1 or a mobile telephone 3 or the like. It is also possible to connect a camera 4, such as a digital camera, to such user terminal equipment. It is also conceivable that an image object is produced by taking a picture by a conventional camera, and then converting the picture into a digital form by a scanner.

[0053] Figure 3 shows the structure of the databases in association with the server computer 10. According to the figure, the model group database 50 may comprise several sub-databases. Similarly, the address database 60 may comprise a main database and several sub-databases. The user database 70 has a similar structure. All these databases are arranged to communicate with the server computer 10 whose software comprises logic means for managing the information in the databases. The model group database 50 used by the system comprises model groups 51 comprising the type images of the group. Figure 4B shows an example of a type image 51a. Marked in the type image 51a, Figure 4C shows e.g. areas 511a to 516a that can be provided with measurement information and/or the model group to which a particular area has a logical factual connection. The type images comprise measurable image areas, the logical factual connection of the measurable image area to other model groups, and the feature specifications of the image area. For instance, the type image area 511a illustrates a hair area, and it is provided with the feature 'sub-area', which has a connection e.g. to a model group called 'hair colour'. Image areas 512a and 513a illustrate eyebrows, which is a 'sub-area' feature relating e.g. to a model group called 'colour of eyebrow pencil'. The figure shows image areas 514a and 515a to be measured, the image area 514a illustrating the height of an ear in centimetres; the ear has a connection e.g. to a model group called 'earring' from this area. An image area 515a, in turn, illustrates an image area to be measured called 'distance between pupils', which can be connected to the model group 'frames'. An image area 516a illustrates the lips and it is provided with the feature 'sub-area' and connected to a group called 'lipstick'. The model group database 50 maintains the measurement unit of the type images 51a. The image areas whose measurement information is to be entered are stored in the type images of the model group database. In addition, these image areas to be measured are provided with definitions for a logical factual connection to other model groups and the location of the image area into which the image object of the model group having a logical factual connection is to be positioned. The model group is provided with a definition for a default procedure to be performed on the image object when the image object is recognized and confirmed. According to a preferred embodiment of the system, the system proposes a recognized image object to the user, who confirms or rejects the proposed image object. The image object database 40 may be located at the same address as the model group database 50, or at any server in the information network 11 that has a recognizable address in the information network.
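The type-image structure described above (measurable areas, 'feature' specifications, logical factual connections and default locations) can be expressed as a small data model. All class and field names below are illustrative assumptions chosen to mirror the description, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class ImageArea:
    name: str                                         # e.g. 'distance between pupils'
    feature: Optional[str] = None                     # e.g. 'sub-area', if any
    connected_group: Optional[str] = None             # model group of the connection
    default_location: Optional[Tuple[float, float]] = None  # relative (x, y)

@dataclass
class TypeImage:
    model_group: str
    areas: List[ImageArea] = field(default_factory=list)

    def connections(self) -> Dict[str, str]:
        """Map each area to the model group it has a logical factual
        connection with, as defined in the type image."""
        return {a.name: a.connected_group
                for a in self.areas if a.connected_group}

# The facial type image of Figures 4B/4C, rendered in this structure:
face_type = TypeImage("face", [
    ImageArea("hair", feature="sub-area", connected_group="hair colour"),
    ImageArea("distance between pupils", connected_group="frames",
              default_location=(0.5, 0.42)),   # illustrative coordinates
    ImageArea("lips", feature="sub-area", connected_group="lipstick"),
])
```

Querying `face_type.connections()` would tell the system that the pupil-distance area can be combined with image objects of the 'frames' model group, while the 'sub-area' entries trigger the pointer-based area selection described in paragraph [0016].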

[0054] Preferably, an image object can be combined with another selected image object in the system on a correct scale such that the second logic means comprise a model group database comprising a set of model groups comprising type images of the image objects that belong to each group and a selection logic for recognizing and confirming the model group of an image object. According to Figures 12A to 12E, the model group database is provided with definitions for the logical factual connections of the model groups to other model groups and additional information enabling a factual connection of the image objects to be implemented on a correct scale in a combined image object. The additional information is measurement information on the parts of the image object. The logical factual connections to other model groups are implemented using the information in the type images of the model group indicating the part of the image object which can be provided with measurement information and to which model group or model groups this measurement information part has a logical factual connection. In addition, the default location of the image object having a logical factual connection is indicated in the type images of the model group, according to Figures 13A to 13C. The user can change the location of the image object by employing ordinary image processing methods. The distance between the pupils, for instance, is one piece of additional information relating to a facial image. On the other hand, the width of frames is one piece of additional information relating to the frames. A logical factual connection, which is implemented using the measurement information on the distance between the pupils and the width of the frames, is defined between the facial image part 'pupils' of a model group and the 'eyewear frames' model group. On the basis of the above-mentioned measurement information, the system converts the image objects defined by the users to the same scale and aligns the image objects with respect to each other to the point in the image area determined for the particular logical factual connection as the default location.

[0055] Each model group comprises a specification called 'feature' to define special functions associated with a particular model group. A model group and a single image object may have zero, one, or several 'feature' specifications. 'Colour' is a common feature among the model groups, used for determining that the colours of the image objects in a particular group can be given different shades. Model-group-specifically or image-object-specifically, the colour shades can be restricted to certain colours exclusively. Another common feature is a feature called a 'sub-area'. When a model group has been provided with the feature 'sub-area', the system automatically asks the user to indicate the operating area by a pointer. For instance, a sub-area specification 'cheek' is used for adjusting the blusher. The feature determined for the cheek area is 'sub-area'. When the user, using the pointer, indicates the location of the image object with the feature 'sub-area', the system produces a menu displaying the names of the model groups to which the particular location has a logical factual connection. When the user selects one of the model groups shown in the menu, the system guides the user to indicate the area to which the user wishes the image object of the selected model group to be placed.

[0056] The 'feature' specification is also used for determining the default location of the image object in a model group when the image object is being combined with another image object. The most common default locations are specifications called 'background', 'front' and 'parallel'. For instance, a room has a 'background' feature, whereby a room image object entered into the system is, as default, placed as a background image to enable e.g. furniture to be fitted thereon. The default specifications can be changed by means of conventional image processing procedures.

[0057] On the basis of the model group procedure, it can be decided to which group a particular image object belongs and to which model group or groups the image object has a logical factual connection. In order to enable the image object to be attached to another image object, the measurement information on at least one image object area is required. The measurement information on the image object area may be measurement information on the outer edges of the image object, e.g. the width and height of a couch, or measurement information on a single sub-area of the image object, such as the width and height of a sofa cushion. This image object area provided with measurement information can be provided with a definition for a logical factual connection to another model group. For instance, information provided on a facial image includes the length of the distance between the pupils, which enables frames whose width is known to the system to be fitted on a correct scale. The system is able to conclude the height of the frames on the basis of the given width measurement information. The default location of the frame image object is determined by indicating the default location in the type images. For the image area, a default location is determined in each logical factual connection. An exception is an image area with the feature 'sub-area', indicated or restricted by the user only after selecting the image object to be combined.

[0058] According to Figure 6, the model group database 50 comprises several model group levels. According to Figure 4A, a group level illustrates one or more type images to which the system compares an image entered into the system. Furthermore, the type images of a model group are provided with additional information (measurements) and areas that are needed if the image object of the particular group is used for fitting together with the image objects belonging to model groups having a logical factual connection. New models can be added to the model group database 50 as necessary, mainly by the system administrator.

[0059] The type images 51a used in pattern recognition may be located at the same network address as the model group database 50, or at any server connected to the information network 11 that has a recognizable network address. An example of a model group 51 in the model group database 50 is a wheel rim group; in the type images of the wheel rim group, the diameter of a wheel rim is defined as the measurement information.

[0060] According to Figure 4A, the model group database 50 uses the type images 51a for recognizing the image object groups. The model group 51 further comprises a link to addresses at which the image objects of the group are located. This link is used when the user browses the image objects of the group. The model group 51 is recognized from the image object by software or alternatively, if recognition by software is not possible, by the user. The recognition by software is based on pattern recognition technology, e.g. on an OCR technology known per se. In the system, a group comprises one or more type images, and the software examines the type images group by group when a new image object is entered into the system. The image objects may also comprise colours or colour scales that are utilized by the recognition software. An image object may comprise a body of text, interpreted by the system as the necessary measurement information.

[0061] Definitions for model groups 51 that are not subjected to pattern recognition can also be provided in the system. The user then selects from the existing model groups 51. An example of such a model group 51 is a landscape image. The system also maintains information on the model groups 51 most frequently used and searches the model groups 51 in order of popularity. If the user defines an image object as belonging to the model group 51 without using automated recognition, information on the image object remains in the system and a main user can add the image object as a type image to the model group database 50 by software. In the exemplary case, the wheel rim group is provided with a logical factual connection to a car group.
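The popularity-ordered search mentioned above can be sketched as a small index that counts uses per model group; the class and field names are assumptions for illustration only.

```python
from collections import Counter

class ModelGroupIndex:
    """Keeps a use count per model group and returns the groups
    most-frequently-used first, so popular groups are examined early."""

    def __init__(self, groups):
        self.groups = list(groups)
        self.uses = Counter()

    def record_use(self, group: str) -> None:
        self.uses[group] += 1

    def search_order(self):
        # Highest use count first; unused groups keep their original order
        # because Python's sort is stable.
        return sorted(self.groups, key=lambda g: -self.uses[g])
```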

[0062] The logical factual connection of the image areas to be measured of the type images of the model group database 50 to other model groups determines how the model group 51 can be used. If the type images of a model group called 'facial image' comprise e.g. image areas 'moustache', 'beard', 'eyebrow', 'cheek', 'nose', and 'hair', the user may access the system and select the desired one from among the facial image patterns already existing in the system. Image objects from the model groups 'moustache', 'beard', 'eyebrow', 'cheek', 'nose' and 'hair' can then be fitted on this face, the image objects thus forming an integrated whole desired by the user. When the user stores the image, it can be defined as public or private. If the image object is stored as private, the user is given a user key. The user key is image-object-specific, so it enables an image object protected by the particular user key to be used. If the user enters his or her own facial image into the system, the user has to provide the measurement information on at least the model group whose public image objects the user wishes to try with his or her facial image. For instance the face has a logical factual connection to a group called 'body'. A body and a face are combined on the basis of the thickness of the neck. If the facial image is then stored as a public image object to be fitted on different bodies, the 'thickness of the neck' measurement information is to be provided.
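Combining a face and a body on the basis of the thickness of the neck likewise reduces to a single ratio. A minimal sketch follows; the dictionary field name is a hypothetical assumption, not a name from the disclosure.

```python
def combine_on_neck(face: dict, body: dict) -> float:
    """Return the scale factor applied to the facial image so that its neck
    width matches the body's; both objects must carry the measurement."""
    key = "neck_thickness_px"  # hypothetical name for the required measurement
    if key not in face or key not in body:
        raise ValueError("'thickness of the neck' measurement is required")
    return body[key] / face[key]
```

A face whose neck is 50 px wide would thus be scaled by 2.0 before being fitted on a body whose neck is 100 px wide.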

[0063] The user database 70 is used for managing the owners of the image objects and, if necessary, for restricting the use of the system. Restrictions can be made by requiring a user identifier for the entire system or for the image objects of a particular group. One user may have one or more image objects in the system. If the user desires, he or she can register in the user database, in which case by logging in as a registered user, he or she is allowed to use all image objects available for him or her in the system.

[0064] The address database 60 is used for managing the physical storage locations of the image objects entered into the system and the period of time for which the image objects are to be used in the system. The system comprises a function for the system administrator to combine the address information on image objects belonging to different model groups illustrating similar subjects or objects. For instance a side picture and a front picture of a car belong to different model groups. When the image object has links, the browser device of the user displays a menu enabling linked information to be produced without delay. This enables e.g. a search for side and front pictures of a car to be carried out in a shorter period of time. The system administrator is informed of the links in connection with the image object or through information otherwise provided by the sender of the image object. The image objects can be located at the same address as the address database 60 or at any server 10 connected to the information network that has a working network address. The address database 60 used by the system comprises time information and information on the network addresses of the image objects entered into the system. The time information comprises the creation time and the storage time. The address database 60 itself may also be stored.

[0065] Some of the image objects are permanently and publicly available for the users of the system. Some of the image objects, in turn, are image objects that are fitted on another image object only once. Such image objects include e.g. the users' own image objects entered into the system to be fitted. If the users enter image objects into the system that do not have a logical factual connection with each other, the system is incapable of combining the image objects on a correct scale. The user can then combine the image objects with each other or change their places in ways enabled by an image processing program known per se. These image objects are not to scale.

[0066] Figures 2A to 2E and 5A and 5B show, by means of an example, the operation of a method necessary for combining two or more digital image objects A, B into a combined image object A'B'. The method operates according to the following method steps, implemented by computer software comprising first logic means 20 in a server 10. According to steps 20, 110, an essential image object A', B' is determined from a digital image A, B entered into the system according to step 13 or 100. According to steps 20, 120, 510, second logic means 25 in the server 10 separate the essential image object A', B' from the digital image A, B entered into the system, after which, according to steps 30, 150, 540, third logic means 30 in the server 10 combine the essential image objects A', B' on a correct scale into a combined image object A'B'.

[0067] Referring to Figure 5A exclusively, the method preferably comprises recognizing, whereby first execution step 100 comprises receiving and storing two or more digital images A, B into the system. Second step 110 comprises recognizing and confirming the essential image object A', B', i.e. the part the user is interested in, from the digital image entered into the system. Next, step 120 comprises separating the essential, i.e. the interesting, image object A', B' from the digital image A, B entered into the system. Fourth step 130 comprises recognizing and confirming the model group 51 of the essential image object A', B'. Fifth step 140 comprises selecting one of the model groups to which the image object entered into the system has a logical factual connection, and providing the measurement information on the area in which the image object entered into the system and the selected model group are in relation to each other. Sixth step 150 comprises combining the essential image object A' with another selected image object B' in the system in order to generate an image object A'B'.
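The steps above can be sketched end to end over plain dictionaries. Recognition and separation are stubbed out as key lookups, and every field name is an illustrative assumption rather than a name from the disclosure.

```python
def combine_images(image_a: dict, image_b: dict) -> dict:
    """Steps 100-150 of Figure 5A as a minimal sketch: take the essential
    objects, check the logical factual connection between the model groups,
    then combine on a correct scale via the shared measurement."""
    # Steps 110-120: recognize and separate the essential image objects
    # (stubbed here as simple lookups).
    essential_a = image_a["essential"]
    essential_b = image_b["essential"]
    # Steps 130-140: the two model groups must be factually connected.
    if image_a["group"] not in image_b["connections"]:
        raise ValueError("no logical factual connection between model groups")
    # Step 150: the ratio of the shared measurements fixes the scale.
    scale = image_b["measurement_px"] / image_a["measurement_px"]
    return {"combined": (essential_a, essential_b), "scale": scale}
```

For example, a wheel rim object whose shared measurement is 40 px, fitted on a car object where the same measurement is 80 px, is combined at scale 2.0.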

[0068] Using Figures 6, 7, 8, 9, 10 and 11, the operation of the method can be examined in closer detail according to the following model. Processually, the method can be described using the following steps:

[0069] According to execution steps 710, 810, the image objects A, B are received and stored into the memory of a server computer. Next, according to steps 720, 820, the essential image object A', B' of the image object A, B is determined. This means that the essential image object A', B' is the part of the image which is to be combined with another image object. After this has been conducted, the essential image object A', B' is recognized, separated and confirmed from the image object according to steps 730, 830. Preferably, this takes place interactively between the user of the system and the pattern recognition program. The system suggests an essential image object A', B' and next, the user either accepts or rejects the image object A', B' suggested by the system. If the user does not accept the image object A', B' suggested by the system, it asks the user to restrict the image object e.g. by a pointer. After the essential image object has been separated, the model group of the image object is recognized and confirmed according to steps 740, 840, 1020. The system then examines all type images of the model group and suggests the model group closest to the pattern. This is thus carried out by utilizing pattern recognition, by comparing the essential image object to the type images 51a stored in the system and, on the basis of these type images, by recognizing the model group of the essential image object A', B'. If the system comprises a model group to which the image object fits, the process continues. If no such model group can be found, the system asks the user to select the model group. After the model group has been found, the model group whose image object is used for combining the image object with the user's image object is selected from among the model groups having a logical factual connection to the model group of the image object. 
If two private image objects are combined, the system checks whether the model groups of these image objects have a factual connection with each other according to steps 940, 1050 and if so, the system, according to steps 950, 1160, asks for the measurements of the areas wherein the image areas of the type images of the model groups have been combined with each other.

[0070] In the special embodiment according to Figures 6, 7, 10 and 11, when the image object is e.g. a car, the pattern of a car is recognized from a car image object A by means of pattern recognition, according to execution steps 730, 830. According to step 610, the system recognizes that the image object belongs to the model group 50 'car', and the user, according to steps 740, 840, 1020, confirms the group. In step 620, the system produces the menus of the groups 51 to which the car group has a determined logical factual connection. By steps 630, 1120, the user selects e.g. the wheel rim group from among these groups.

[0071] The system is provided with descriptions of the logical factual connection between the car group and the wheel rim group by means of information concerning the diameter of a wheel rim. The system asks for the measurement information on the diameter of the wheel rim of the image object A in the image area. The measurement information can be determined automatically by the system if one measurement has already been provided on the image object A' and a second measurement is in relation thereto (e.g. wheel rim diameter and radius). If the system is not able to determine the measurement information automatically, it is determined interactively with the user. This is followed by selecting, according to steps 1040, 1130, the wheel rim image object B with which the essential image object A' is to be combined. The second essential image object B' can be selected from the original image object in a corresponding manner to that described above, or the original image object B may also be used as the image object to be combined, in which case the original image object B is defined as the essential image object B'. Next, the system displays the combined image object A'B' on a screen of the terminal equipment; the combined image object A'B' can then be stored or printed.
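Deriving one measurement automatically from another that stands in a known relation to it, as with a wheel rim's diameter and radius, can be sketched as a small relation table; the table contents and function name are assumptions for illustration.

```python
# Known relations between measurement names; extendable per model group.
RELATIONS = {
    ("radius", "diameter"): lambda r: 2 * r,
    ("diameter", "radius"): lambda d: d / 2,
}

def derive_measurement(known_name: str, known_value: float, wanted_name: str):
    """Return the wanted measurement if a relation to the known one exists,
    or None, in which case the system asks the user interactively."""
    relation = RELATIONS.get((known_name, wanted_name))
    return None if relation is None else relation(known_value)
```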

[0072] Figures 7, 8, 9, 10 and 11 show different embodiments of the system of the examples. If the combined image object A'B' is to be stored, the software asks the user to select the role of the image object according to execution steps 750, 860, 1070, 1180. The role may be either public or private. If the role is public, according to step 1030, the image remains available for use and view for all users in the system. The public image object retains the measurement information that has not changed compared to the original image object; only the changes in the measurement information caused by the combining process have to be determined.

[0073] If the selected role is private, according to steps 760, 910, a user key, which according to steps 920, 1110 enables the user to use the image object in the system, is sent to the user. According to step 850, no measurement information on the private image object has to be stored while storing the image object. According to steps 950, 1160, the necessary measurement information on the private image object can be provided when the image object is being combined with another image object according to steps 960, 1170.
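Issuing and checking an image-object-specific user key can be sketched with Python's standard `secrets` module; the registry dictionary and function names are illustrative assumptions.

```python
import secrets

def issue_user_key(image_object_id: str, registry: dict) -> str:
    """Issue an unguessable key bound to one image object and record it."""
    key = secrets.token_hex(16)
    registry[key] = image_object_id
    return key

def may_use(key: str, image_object_id: str, registry: dict) -> bool:
    """A key grants use of exactly the image object it was issued for."""
    return registry.get(key) == image_object_id
```

Because the key is image-object-specific, two users can exchange keys during a session to grant each other access to their private image objects without exposing anything else.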

[0074] The above-described process of combining image objects in the system can also be carried out as cooperation between separate connections established to the system from different sources, i.e. Internet sessions. In such a case, two users simultaneously establish a connection to the system, and they send their own image objects to the system to be entirely or partly combined with each other. This requires that during the session, the users intercommunicate and exchange the user keys enabling access at least to each other's image objects according to step 930.

[0075] In the following, a group structure according to the model group database 50 will be disclosed. A group of paintings serves as an example. The group 'paintings' has a logical factual connection to the group 'frames'. The system is provided with a description of the factual connection by means of measurement information on the width of a painting and measurement information on the width of frames. The user is able to try on the frame models defined as public in the system. The features given to the frames are 'material' and 'colour', the area being the entire frame area. The user is able to try the materials defined in the 'materials' group. A material is provided with the feature 'colour'. A frame has a logical factual connection to the 'wall' group. The system is provided with a description of the logical factual connection by means of measurement information on the width of a wall.
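The painting/frames/wall grouping just described can be expressed as a small data structure mapping each group to its factual connections (keyed by the measurement describing each connection) and its features; all field names are illustrative assumptions.

```python
# Each model group lists the groups it connects to, keyed by the measurement
# that describes the connection, plus the features given to the group.
MODEL_GROUPS = {
    "paintings": {"connections": {"frames": "width"}, "features": []},
    "frames":    {"connections": {"wall": "width"},
                  "features": ["material", "colour"]},
    "materials": {"connections": {}, "features": ["colour"]},
    "wall":      {"connections": {}, "features": []},
}

def connection_measurement(group_a: str, group_b: str):
    """The measurement describing the factual connection, or None."""
    return MODEL_GROUPS[group_a]["connections"].get(group_b)
```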

[0076] On account of the above grouping, the user may take a picture of the painting and try the different frame options around the painting. Next, the user takes a picture of his or her room, and an image object comprising a wall, a painting, frames and frame material and frame colour is arranged on a wall of this room. By providing the system with one dimension of the image object, the system is able to calculate other dimensions comparable to this dimension.

[0077] Such a system and method enable a solution according to Figures 7, 8, 9 and 10, which is an image manipulation system operating in an information network, such as the Internet. According to execution steps 710, 810, from the digital image sent to the system, the system recognizes the essential image object according to steps 720, 820 and combines the image object with another image object in the system selected by the user on a correct scale. The user or several users then see one image combined into an integrated whole. The system in its entirety thus comprises software, which is server software, an image object database 40, a model group database 50, an address database 60, a user database 70, a measurement unit database 80 and a feature database 90. The program operates on a WWW server, to which, according to step 1010, a connection is established from a data transfer device equipped with an Internet browser. Execution step 1010 enables image material to be entered into the system either on a direct connection through the Internet browser or, alternatively, by e-mail, an ftp connection or a physical data device, such as a compact disc, diskette, magnetic tape or the like. The system then processes the image object as determined by the above-disclosed method, after which the image object can be utilized for image manipulation either on a permanent basis by storing the image object in the system, or only once during a particular WWW session without storage. The model group comprises type images provided with marked-off image areas which, according to step 1060, enable the measurement information to be stored therein and the system to be provided with descriptions of the logical factual connections according to steps 940, 1050, 1150. The type images in the model group are also provided with an indication of the default location of the image object to be combined with the image. Each factual connection determined for a particular image area can be provided with a definition for a default location.
The address database maintains the storing addresses and directories of the image objects in the system. The user database is used for determining user rights as regards the use of different image objects. The measurement unit database is used for converting the given measurements into compatible ones, irrespective of the given measurement unit. The system thus operates in an information network device-independently, which means that each data transfer device enabling a connection to be established to the information network and comprising an Internet browser is suitable to be connected to the system.
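The measurement unit database's conversion logic can be sketched as factors to a common base unit; the choice of millimetres as base unit and the factor table are assumptions for illustration.

```python
# Factors from each supported unit to a common base unit (millimetres).
TO_MM = {"mm": 1.0, "cm": 10.0, "m": 1000.0, "in": 25.4}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between any two supported units via the base unit, making
    measurements entered in different units compatible with each other."""
    return value * TO_MM[from_unit] / TO_MM[to_unit]
```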

[0078] A user of the system refers to a person or a company sending one or more image objects to the system. The image object is provided with the feature 'public' or 'private'. The feature 'public' or 'private' determines the way in which the image object can be utilized by other users in the system. A user key is applied to private image objects of the system to enable the private image objects to be utilized by the users. The system is used employing ordinary methods known per se from Internet browsers. In other words, the system can be used on Internet homepages, through a portal or a hyperlink on a homepage of a company utilizing the system, whereby a server computer in the system is also put to use.

[0079] If the system is utilized through company homepages, as far as the image objects of the particular company are concerned, the system is arranged to only enable the fitting of the company's image objects entered into the system. This limitation is carried out by a limiting method known per se, using network addresses or other such network identifiers to enable the limitation to be actuated. Alternatively, it is also possible to use a registration procedure or a log-in procedure with a user identifier and a password.

[0080] By means of three examples referring to Figures 1 to 13, it will be shown in the following how the system can be applied as seen from the point of view of a user of the system.

[0081] Using his or her digital camera, user X, while on a business trip in Paris, takes a picture of wheel rims that might suit his or her car. User X conveys a digital image A from the digital camera to his or her mobile telephone equipped with an Internet browser and establishes an Internet connection to a system for combining digital image objects, sending the image to the system. User X defines the wheel rim image object A as private and the system gives user X a user key to the image object. During the session, user X informs user Y in Finland that he or she would like a picture of his or her car to be sent to the system for combining image objects in order to be able to try some attractive wheel rims on his or her car. User Y accesses the system by his or her home computer connected to the Internet and sends an image of user X's car from the memory of the computer to the system. User Y defines a car image object B as private and the system gives user Y a user key to the image object. User Y transmits the user key to the car image object B to user X as a message. By combining the user keys, user X is now able to process both images A and B in the system according to the steps of the above-described method, fit the suitable wheel rims A' to his or her car B' and examine the compatibility therebetween in a combined image object A'B'. Furthermore, both users are able to participate in processing the image object on condition that user Y is also provided with the user key to the wheel rim image object sent to the system by user X.

[0082] Using a digital camera, user X, while on a business trip in Paris, takes a picture of wheel rims that might suit his or her own car. User X conveys the digital image A from the digital camera to his or her mobile telephone equipped with an Internet browser and establishes an Internet connection to the system for combining image objects, sending the image to the system. User X defines the wheel rim image object A as private and the system gives user X a user key to the image object. User X defines a week as the storage time, for which time user X is charged for the use of the service. Once at home after returning from the business trip, using a digital camera, user X takes a picture of his or her car in order to be able to try some attractive wheel rims on his or her car. Using his or her home computer connected to the Internet, user X accesses the system and sends the image he or she took of the car, which was stored in the memory of the computer, to the system for combining image objects. User X defines the car image object B as private and the system gives user X a user key to the image object. By combining the user keys, user X is now able to process both images A and B in the system according to the steps of the above-described method and fit the attractive wheel rims A' on his or her car B' and examine the compatibility therebetween in a combined image object A'B'.

[0083] Sohva Oy, a furniture company, is a contract client of the system for combining image objects; the company sends images of new couch furniture for public fitting and measurement information thereof to the system. Since the images are public image objects, they are available for fitting to all users in the system. The system separates the essential image objects A', i.e. couch furniture, from the image objects A, and provides the image objects with measurement information, i.e. height and width, which is supplied in a file together with the images. The image object information A' is stored in the image object databases of the system. Using a digital camera, user X takes a picture of his or her living-room B, in which he or she wishes to fit the couch furniture provided in the product range of Sohva Oy available for fitting in the system for combining image objects. The system does not recognize the essential image object B', so the user defines the image object as belonging to the group 'living-room'. The default procedure of the group 'living-room' is to show the model groups (to which a logical factual connection exists), one of the model groups being 'couch'. User X selects the group 'couch' and the system asks for the width and height of the living-room. These being provided, the system sets the living-room B' as the background image, and the system asks the user to select a couch A' from the product range provided by Sohva Oy to be placed on the background image. When the couch A' is being fitted on the living-room image B', the location of the couch can be varied using a pointer of the terminal equipment, e.g. a mouse. After an appropriate combination of an image object A'B' comprising the living-room and the couch has been found, it can be stored and the process may continue by fitting new image objects, such as lamps, on the image object.

[0084] It is to be understood that the above description and the related figures are only intended to illustrate the present invention. The invention is thus not restricted only to the above-described embodiment nor to the embodiment defined by the claims but it will be obvious to one skilled in the art that the invention can be varied and modified in many different ways feasible within the scope of the inventive idea disclosed in the attached claims.

Claims

1. A system for combining two or more digital image objects (A', B') into a combined image object (A'B'), the system comprising a server (10) and storing means (13) in the server (10) for receiving and storing digital images (A, B) into the system, c h a r a c t e r i z e d in that the system comprises first logic means (20) arranged to recognize and confirm an essential image object (A', B') from a digital image (A, B) entered into the system, the first logic means being arranged to separate the essential image object (A', B') from the digital image (A, B) entered into the system, the system comprises at least one model group database (50) comprising a set of model groups (51) and type images (51a) therein preferably comprising one or more image areas enabling the essential image objects (A', B') to be combined on a correct scale, the system being arranged to convey information on the image areas to the essential image object (A', B'), the system comprises second logic means arranged to compare the essential image object to the type images in the model group database and to select the type image recognized by the system as the closest one to the essential image object (A', B'), and from this type image (51a), to attach the functional image areas of the type image with their measurement information requirements to the essential image object (A', B'), the system comprises third logic means (30) arranged to combine at least two essential image objects (A', B') with each other on a correct scale into a combined image object (A'B') on the basis of given information or information determined by the type images.
2. A system as claimed in claim 1, c h a r a c t e r i z e d in that a logical connection to other model groups (51) is defined in the image areas of the essential image objects (A', B') in the database structure.
3. A system as claimed in claim 2, c h a r a c t e r i z e d in that the type images (51a) take into account the placatory image area and the measurement information required for the image area and/or to which model groups this measurement information part has a logical factual connection.
4. A system as claimed in claim 1, characterized in that the model group is associated with features and/or default procedures stored in a feature database (90) comprising information on the default procedures arranged to be carried out when an image object associated with the model group is entered into the system. Each model group can be provided with definitions for unique features and/or default procedures of its own.
5. A system as claimed in claim 1, characterized in that the system comprises an image object database (40) comprising essential image objects.
6. A system as claimed in claim 5, characterized in that the system comprises an address database (60) comprising an organized list of location identifiers, whereby the address database (60) comprises the location identifiers of the image object databases (40), feature databases (90) and model group databases (50) in the system.
7. A system as claimed in any one of claims 1 to 6, characterize d in that the system comprises a measurement unit database (80) comprising a conversion logic for different measurement unit systems, the system being arranged to scale measurements in different measurement units according to a selected measurement unit system.
8. A system as claimed in any one of claims 1 to 7, characterize d in that the system is arranged to operate in a common information network (11).
9. A system as claimed in any one of claims 1 to 8, characterize d in that the system is arranged to be used by terminal equipment (2, 3) connected to the information network.
10. A system as claimed in any one of claims 5 to 9, character i z e d in that the system is arranged to combine image objects in the image object databases (40) at location addresses managed by the address database (60) of the system, image objects entered into the system during use or combinations thereof.
11. A system as claimed in any one of claims 1 to 10, c h a r a c - t e r i z e d in that the system is arranged to allow access to two or more users to use the system simultaneously and to process each other's image objects.
12. A method for combining two or more digital image objects (A', B') into a combined image object (A'B'), characterized in that the method comprises the following steps: receiving one or more digital images (A, B), recognizing and confirming an essential image object from the digital image (A, B) entered into a system, separating said essential image object (A', B') from the digital image (A, B) entered into the system, recognizing and confirming a model group of said essential image object (A', B'), and attaching to the essential image object (A', B') the functional image areas including their measurement information of a type image that comes closest thereto, combining said essential image objects (A', B') into a combined image object (A'B') on a correct scale on the basis of the given measurement information.
13. A method as claimed in claim 12, characterized in that the model group of the image object is recognized using pattern recognition.
14. A method as claimed in claim 12 or 13, characterized in that the image objects (A, A', B, B', A'B') in the system are defined as public or private image objects.
15. A method as claimed in claims 12 to 14, characterized in that a private image object is provided with a user key to enable the image object to be used in the system.
16. A method as claimed in claim 15, characterized in that a smart card, fingerprint recognition or other such commonly known protection method is utilized as a protection method.
17. A method as claimed in claim 15 or 16, characterized in that the system combines two or more private image objects with each other on the basis of the user keys provided by a user or users.
PCT/FI2001/000882 2000-10-12 2001-10-11 System and method for combining two or more digital picture objects WO2002031774A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20002249 2000-10-12
FI20002249A FI113899B (en) 2000-10-12 2000-10-12 A system and method for combining two or more digital image object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU1058302A AU1058302A (en) 2000-10-12 2001-10-11 System and method for combining two or more digital picture objects

Publications (2)

Publication Number Publication Date
WO2002031774A1 true WO2002031774A1 (en) 2002-04-18
WO2002031774A8 WO2002031774A8 (en) 2004-04-22

Family

ID=8559281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2001/000882 WO2002031774A1 (en) 2000-10-12 2001-10-11 System and method for combining two or more digital picture objects

Country Status (3)

Country Link
AU (1) AU1058302A (en)
FI (1) FI113899B (en)
WO (1) WO2002031774A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5375195A (en) * 1992-06-29 1994-12-20 Johnston; Victor S. Method and apparatus for generating composites of human faces
US5542037A (en) * 1992-08-24 1996-07-30 Casio Computer Co., Ltd. Image displaying apparatus wherein selected stored image data is combined and the combined image data is displayed
EP0704822A2 (en) * 1994-09-30 1996-04-03 Istituto Trentino Di Cultura A method of storing and retrieving images of people, for example, in photographic archives and for the construction of identikit images
US5764790A (en) * 1994-09-30 1998-06-09 Istituto Trentino Di Cultura Method of storing and retrieving images of people, for example, in photographic archives and for the construction of identikit images
US5986671A (en) * 1997-04-10 1999-11-16 Eastman Kodak Company Method of combining two digitally generated images

Also Published As

Publication number Publication date
FI20002249A (en) 2002-06-17
WO2002031774A8 (en) 2004-04-22
FI113899B1 (en)
FI20002249D0 (en)
FI20002249A0 (en) 2000-10-12
FI113899B (en) 2004-06-30
AU1058302A (en) 2002-04-22

Similar Documents

Publication Publication Date Title
US7203367B2 (en) Indexing, storage and retrieval of digital images
US6847383B2 (en) System and method for accurately displaying superimposed images
US8306872B2 (en) Search supporting system, search supporting method and search supporting program
AU695272B1 (en) Method and system for software development and software design evaluation server
US7479956B2 (en) Method of virtual garment fitting, selection, and processing
US6052122A (en) Method and apparatus for matching registered profiles
US6658410B1 (en) System and method for intermediating information
US7711611B2 (en) Wish list
JP3543395B2 (en) Services provided and how to use
CN102216941B (en) A method and system for handling content
US7216092B1 (en) Intelligent personalization system and method
US7444354B2 (en) Method and apparatus for storing images, method and apparatus for instructing image filing, image storing system, method and apparatus for image evaluation, and programs therefor
US7548874B2 (en) System and method for group advertisement optimization
EP0823809B1 (en) Universal directory service
JP4583181B2 (en) Service providing device and a service providing method of a user center
US6334109B1 (en) Distributed personalized advertisement system and method
US9430780B2 (en) Communication service method and communication apparatus thereof
CN105637512B (en) The method used to create customized products and systems
US20030050815A1 (en) System for purchasing geographically distinctive items via a communications network
CN201780605U (en) Clothes fitting system
JP4196336B2 (en) Image print system using a peer-to-peer network
US20030063778A1 (en) Method and apparatus for generating models of individuals
JP4856353B2 (en) Makeup display / sales system and method
US7620270B2 (en) Method for creating and using affective information in a digital imaging system
CN101164083B (en) Album generating apparatus, album generating method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

CFP Corrected version of a pamphlet front page
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
CFP Corrected version of a pamphlet front page
CR1 Correction of entry in section i

Free format text: IN PCT GAZETTE 16/2002 DUE TO A TECHNICAL PROBLEM AT THE TIME OF INTERNATIONAL PUBLICATION, SOME INFORMATION WAS MISSING UNDER (81). THE MISSING INFORMATION NOW APPEARS IN THE CORRECTED VERSION

NENP Non-entry into the national phase in:

Ref country code: JP