WO2003105466A1 - Method of imaging an object and mobile imaging device - Google Patents

Method of imaging an object and mobile imaging device

Info

Publication number
WO2003105466A1
WO2003105466A1 PCT/IB2003/002330
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
taken
imaging
view
Prior art date
Application number
PCT/IB2003/002330
Other languages
French (fr)
Inventor
Jyh-Kuen Horng
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to AU2003233097A priority Critical patent/AU2003233097A1/en
Publication of WO2003105466A1 publication Critical patent/WO2003105466A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders

Definitions

  • a preferred embodiment 1 of the proposed method is outlined in the following with regard to Figure 1.
  • a computer program product may also be comprised by the preferred embodiment of the proposed concept, e.g. a camera system or image processing system may comprise such a computer program product.
  • Such a system includes in particular three kinds of modules: some auxiliary utility instruments 1A, a motion estimator 1B and a coverage calculator 1C.
  • the flow of processing and interactions among the modules in Figure 1 is indicated by arrows.
  • in step 1A the user 10 specifies the object size with the aid of an auxiliary utility instrument and makes the first shot.
  • the auxiliary utility instrument may be a box-shaped frame 4, which helps the user 10 to specify the approximate object size, and which is utilized to compute the coverage rate of the object 3.
  • in step 1B the motion estimator starts its operation. The motion estimator retrieves the approximate moving distance and direction between consecutive images. Such motion offsets are supplied to the coverage calculator 1C.
  • the motion estimator computes and accumulates the motion parameters including directions and quantities according to the changes of scene and decides whether the present view is relevant or not. If the motion between a current view and the last image captured exceeds a specified threshold, the current view is relevant and the camera will take a shot directly or the user 10 will be notified that he needs to make a decision.
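The accumulate-and-threshold behaviour described for the motion estimator can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-frame motion offsets (dx, dy) are assumed to come from whatever frame comparison the device performs, and the threshold value is arbitrary.

```python
import math

class MotionEstimator:
    """Accumulates inter-frame motion offsets and flags the current
    view as relevant once the accumulated motion exceeds a preset
    threshold (at which point a shot is taken or the user notified)."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.acc_dx = 0.0
        self.acc_dy = 0.0

    def update(self, dx: float, dy: float) -> bool:
        """Add the motion offset between two consecutive frames and
        report whether the current view is relevant."""
        self.acc_dx += dx
        self.acc_dy += dy
        if math.hypot(self.acc_dx, self.acc_dy) >= self.threshold:
            self.acc_dx = self.acc_dy = 0.0  # reset after a key frame
            return True
        return False

est = MotionEstimator(threshold=10.0)
# Three identical camera moves of 5 units each: only the second one
# pushes the accumulated motion past the threshold.
flags = [est.update(3.0, 4.0) for _ in range(3)]
```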
  • box 2 of Figure 1 shows a sequence of images 2.1, 2.2, 2.3, 2.4 and 2.5 taken from respective views and covering the object O.
  • the coverage calculator 1C is informed by the motion estimator 1B that it needs to update the status of captured images 2. Subsequently the coverage calculator 1C may also be arranged to display the subsequent direction on a viewfinder 1A or an LCD display to the user. Further useful information, such as coverage rate or view-point positions (2.1 - 2.5), could also be shown on the viewfinder 1A for reference.
  • a user 10 and a shutter 1E are indicated schematically.
  • Each of these four modules provides specific commands or information to other modules and updates their own status or performs specific operations while receiving commands or information from others. Details of the functionality and responsibility are as follows:
  • the user module 10 represents the user himself.
  • the only action the user has to take is to determine whether or not to take the scene shown on the viewfinder 1A as an image when he is informed by the camera. This action is indicated by an arrow pointing from the user 10 to the motion estimator 1B.
  • a shutter module 1E is provided.
  • the term "shutter” is used to represent the whole image capturing mechanism within a digital camera. This includes the lens, the CCD array, the shutter and the electronics controlling the behavior of the camera.
  • the shutter module receives commands from a user 10 and the motion estimator module 1B. When the taking of a shot is requested, the shutter is released and the current scene is taken as an image. A command is then sent to the coverage calculator 1C to update the coverage information.
  • the shutter of the preferred embodiment also provides some internal communication channels 5 specifically adapted to communicate between the modules 1A, 1B, 1C, 1E and 10 as indicated by arrows 5 in Figure 1.
  • the major functionality of the coverage calculator 1C is to compute the coverage rate of the object 3 according to the object size specified at the beginning with the aid of 1A and 4, and the motion offsets computed by the motion estimator 1B.
  • when the shutter finishes capturing a picture, it sends an update command to the coverage calculator 1C.
  • the coverage calculator 1C requests the motion offsets from the motion estimator 1B.
  • the direction which should preferably be followed, i.e. the one determined by the analysis of modules 1B and 1C, is advantageously sent to the shutter and displayed on the viewfinder to advise the user 10.
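A rough sketch of the coverage calculation for a walk-around sequence might look like this. The 360° arc model and the per-image field of view are illustrative assumptions of mine; the patent only specifies that the coverage rate is computed from the object size and the motion offsets.

```python
def coverage_rate(view_angles_deg, fov_deg=60.0):
    """Fraction of a full 360° walk-around covered by the taken views,
    assuming each image covers a horizontal arc of `fov_deg` degrees
    centred on its viewpoint angle (overlaps counted once)."""
    covered = set()
    half = fov_deg / 2
    for a in view_angles_deg:
        for deg in range(int(a - half), int(a + half)):
            covered.add(deg % 360)  # whole degrees, wrapped to [0, 360)
    return len(covered) / 360.0

# Five views roughly 72° apart, as images 2.1 - 2.5 in Figure 1:
rate = coverage_rate([0, 72, 144, 216, 288])
# With a 60° field of view per image, five such views leave five
# uncovered 12° gaps, so coverage stays below 100 %.
```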
  • the motion estimator 1B performs a frame-by-frame comparison to retrieve the motion in between. While the camera is moving, this module runs a real-time motion estimation for every frame. For positions between each of the positions of the images 2.1 - 2.5 of the image sequence of object O, the motion of the camera is accumulated, and if the motion exceeds a preset threshold, the critical frame is recorded directly or a message is displayed to notify the user that a further image is relevant to be supplied to the whole picture.
  • the motion estimator 1B may also deal with queries from the coverage calculator 1C and provide the requested motion information, i.e. indicating only those relevant frames which have to be kept. This functionality is also referred to as key-frame selection.
  • Further preferred embodiments of the proposed concept may comprise modules to perform data collection work followed by off-line processing 6, such as modules for 3D-model generation or object viewing and image stitching.
  • in order to capture the outward appearance of an object, the images should be taken from different viewpoints. These images can be used as the source of a 3D-browser or a 3D-model creator to obtain an enhanced 3D-experience.
  • the user has to walk around the object and take all the pictures necessary for a sufficient 3D-view.
  • the difficulty of making use of conventional imaging devices is that a user has to manually specify any unnecessary pictures and manually has to decide on his own whether he has taken enough pictures.
  • the proposed concept comprises an automatic means of motion estimation to determine the motion distance and the direction between images and an automatic means for coverage calculation to specify a rough object size and calculate whether an object is sufficiently covered to guarantee a sufficient view of the object.

Abstract

In order to capture the outward appearance of an object and represent multiple images of the object at a later stage, the images have to be taken from different view-points. These images can be used as the source of a 3D-browser or a 3D-model creator to obtain an enhanced 3D-experience. The user has to walk around the object and take all the pictures necessary for a sufficient 3D-view. The difficulty of making use of conventional imaging devices is that a user has to manually specify any unnecessary pictures and manually decide on his own whether he has taken enough pictures. The invention proposes a concept which comprises an automatic means for motion estimation to determine the motion distance and the direction between images and an automatic means for coverage calculation to specify an approximate size of an object, and to calculate whether the object is sufficiently covered to guarantee a sufficient view of the object.

Description

Method of imaging an object and mobile imaging device
The invention relates to a method of imaging an object by means of a mobile imaging device, wherein at least a first and a second image of the object are taken for different views. Further, the invention relates to a mobile imaging device for imaging an object, wherein at least a first and a second image of the object are taken for different views.
Contemporary digital cameras provide a variety of improvements such as the improved quality of the optical or image sensoring system or other improvements such as auto-focusing and metering systems. However, only a few products are known to provide a user with added-value functions. One such attractive function is the support of panoramic image capturing. When a user intends to capture the outward appearance of a panorama, conventional imaging devices may provide several additional controls or pieces of information, which allow the user to capture and post-process the panoramic view more easily and efficiently. Conventionally a user has to take multiple images of the panorama from several particular viewpoints instead of just one. Such a panoramic mode of conventional devices also has to be used to capture three-dimensional objects such as a car, a sculpture or a vase. In this case, multiple images of the 3D-object have to be taken from different view-points. As this kind of 3D-imaging differs from panoramic imaging, conventional devices are rarely suitable for completing an attractive three-dimensional image of an object. It may be desirable to improve these images so that they are available for use after imaging as the source of a panoramic view or a 3D-object browser or a 3D-model creator to obtain an enhanced panoramic or 3D-experience. To obtain a better post-viewing result, the user usually has to walk around the object to take all the necessary pictures. However, with conventional devices this is still difficult, as no indication is given as to whether or not one is taking unnecessary pictures or whether or not sufficient images have been taken to cover all parts of the object to be imaged. For instance, pictures taken from almost the same viewpoint may be unnecessary pictures. Also, taking more than the required number of images results in wasted time and storage capacity.
Conventional devices such as the one known from JP 2001119625 can be used to generate a panoramic image, but are not well suited for taking images of three-dimensional objects to be used as the source of a 3D-object view. Similarly, only devices serving as image devices for taking panoramic images are known in prior art. Such panoramic imaging devices are usually handled as follows: If the user selects a panoramic capturing mode, the white balance and exposure value will be fixed and about 40 % of the previous picture will be left on the viewfinder to allow the user to identify the position of the next picture to be taken. The user then moves and roughly aligns the left partial image with the view currently on the viewfinder. The 40 % overlap usually provides sufficient indications to accomplish this manual registration process. The sequence of captured images is stitched together in a subsequent offline procedure to compose the total panoramic image. For this purpose, offline panoramic software may be used. However, this approach is not very well suited to capturing static three-dimensional objects such as cars, statues and human beings. The reasons for this inadequacy of the panoramic approach are as follows: 1. In the case of panoramic capturing, users usually do not move when taking shots of their surroundings. Instead, in the case of object capturing, they focus on the object they are interested in, walk around it and take pictures of it from different view-points. These pictures should cover as many different portions of the object as possible. Unlike in a panoramic mode, the trajectory of the camera will not always be linear. Probably, to image a 3D-object the camera will at least cover two-dimensional space in order to obtain not only horizontal views but also views of the top and bottom. 2. The panoramic approach available nowadays is a fully manual process.
One has to appropriately position the camera in order to ensure that the object to be imaged is matched with the remaining 40 % from a previous image. However, the necessary approach is much more complicated for object mode capturing. It is usually unreasonable to ask the user to both identify suitable viewing positions of a subsequent shot and to remember what parts of information he may have gathered.
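The prior-art 40 % overlap aid amounts to retaining a strip of the previous frame on the viewfinder as a registration guide. A trivial sketch; the pixel width and the assumption of a left-to-right pan are mine, not the document's:

```python
# Fraction of the previous picture left on the viewfinder, per the
# prior-art panoramic mode described above.
OVERLAP_FRACTION = 0.40

def overlap_strip(frame_width: int) -> tuple:
    """Return the (start, end) pixel columns of the previous image's
    right-hand strip that stays visible on the viewfinder, assuming
    the user pans left to right."""
    strip = int(frame_width * OVERLAP_FRACTION)
    return (frame_width - strip, frame_width)

# For a 640-pixel-wide frame, columns 384..640 of the previous shot
# remain on screen; the user aligns the next scene against them.
start, end = overlap_strip(640)
```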
In summary, panoramic capturing devices of prior art do not usually provide any control or assistance in indicating unnecessary images or insufficient imaging of a panorama or an object. Further, the help functions of an imaging device in a panoramic mode are restricted and focused on a linear camera trajectory and generally constitute a fully manual process to secure the matching of subsequent images taken. This is where the invention comes in, the object of which is to provide a method and an apparatus for conveniently imaging all kinds of objects, such as a panorama or a three-dimensional object, and in which the process of capturing an image sequence of such an object is simplified for the user. As regards the method, this object is achieved by a method of imaging an object by means of a mobile imaging device, wherein at least a first and a second image of the object are taken for different views of the object, the method comprising the steps of:
- selecting the object to be imaged,
- taking the first image for a first object view and specifying first image data,
- capturing at least one further image for a different object view and specifying further image data,
wherein according to the invention the method further comprises the steps of:
- processing the first and further image data automatically by the device to generate an output,
- indicating on the basis of the output whether the further image is relevant to be taken as the second image.
As regards the apparatus, the object is achieved by a mobile imaging device for imaging an object, wherein at least a first and a second image of the object are taken to obtain different views of the object, the mobile imaging device comprising:
- a means for selecting the object for imaging,
- a means for taking the first image from a first view and capturing at least one further image from a different view,
- a sensoring means for specifying first image data and further image data,
- a means for automatically processing the first and further image data and generating an output,
- a means for indicating a relevance of the further image on the basis of the output.
A certain position may be common to some object views. Further object views may also result from differing positions or view-points, whereby from each position the object is envisaged in a different direction.
"Capturing" an image comprises any kind of catching or reproduction of an image of the object during the process of viewing the object by means of the mobile imaging device. "Capturing" comprises e.g. transmitting an image of the object in a viewfinder of a camera, the image of which may be examined by a user on a display.
"Taking" an image comprises any kind of capturing of an image and additionally any kind of registering, storing or recording of an image. "Taking" comprises e.g. recording of an image on a storage device contained by the imaging device for a later development, display or processing of the image, or any other use of the image after the process of viewing the object is accomplished.
In the apparatus a display or a viewfinder may be used to select the object. A lens system or any other optical system may be used in combination with an optical sensor to view and capture images. Any kind of sensor such as an imaging sensor and/or position and/or motion indicating sensor may be used to gather image data. Such a sensoring means may also be part of a shutter or a viewfinder, which will also be described in the detailed description.
The proposed invention has arisen from the desire to add an intelligent indication system to a digital camera to provide the user with direct help in taking a sequence of images of an object. The main concept of the proposed invention is to adapt the intelligent indication system in such a way that it is well suited to capturing and taking images of all kinds of objects, in particular by panoramic image sequences and also by three-dimensional image sequences. Whereas in the case of a panoramic image sequence the imaging device is moved along a more or less linear trajectory or at least within a substantially flat two-dimensional surface, in the case of a three-dimensional image sequence the imaging device is moved within three dimensions in an object-centered way to capture object-centered scenes. It was realized that the above-mentioned intelligent indication system could be implemented by tracking the motion between a current view and the last image captured by the user. "Motion" in particular is an effect of the changing view-points of a user.
Image data are preferably already specified automatically. Processor means are suitable for further processing of data and indicating relevance. As proposed, an intelligent indication system is realized in particular by automatic processing of at least the first and further image data by the device to generate an output and by using the output to indicate a relevance of the further image. Image data in particular comprise data of the view or perspective from which the image is taken. The term "view" comprises all kinds of information such as device and object position, or parameters deduced from those positions, such as the distance between the device and the object, for instance. Further, the direction from which the object is imaged or captured may form part of the data, such as the view angle, for instance. Parameters of the object itself, such as the size of the object, may also be comprised by the image data. Further data may concern the circumstances of capturing or the imaging environment, such as brightness or luminance values, contrast values or color parameters. The term "processing" comprises all kinds of calculating or storing of image data as indicated above. In particular, processing comprises the processing of absolute object and device parameters for each individual view taken alone and also relative parameters such as parameters indicating a change between different views. In particular, the latter relative parameters are also referred to by the term "motion".
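The "image data" and relative "motion" parameters just described might be modelled as follows. The field names and units are illustrative assumptions; the patent only enumerates the kinds of information a view may comprise:

```python
from dataclasses import dataclass

@dataclass
class ImageData:
    """Illustrative 'image data' of one view: device position, view
    direction and device-to-object distance (field names are mine,
    not the patent's)."""
    position: tuple    # (x, y, z) device position
    direction: tuple   # (pan, tilt) view angle in degrees
    distance: float    # device-to-object distance

def motion(a: ImageData, b: ImageData) -> tuple:
    """Relative parameters ('motion') between two views: the change
    in position, direction and distance."""
    dp = tuple(q - p for p, q in zip(a.position, b.position))
    dd = tuple(q - p for p, q in zip(a.direction, b.direction))
    return dp + dd + (b.distance - a.distance,)

v1 = ImageData((0.0, 0.0, 0.0), (0.0, 0.0), 2.0)
v2 = ImageData((1.0, 0.0, 0.0), (30.0, 0.0), 2.0)
delta = motion(v1, v2)  # the user stepped sideways and panned 30°
```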
Developed configurations of the invention are further outlined in the dependent claims.
In a further method step the further image may be taken as the second image in case the further image is considered relevant, and the further image data may be assigned as the second image data. Most preferably, in yet another step, the image data are updated to be used as first image data for a subsequent step. In such a subsequent step still further images may be captured and added to the sequence of images by taking a further image as a second image. By such means the proposed concept may be repeated as often as necessary.
The method advantageously comprises the step of indicating, on the basis of the updated image data, a further different view from which a further image can and should be taken in a subsequent step. Such an indication of a new viewpoint may be given to the user e.g. on a viewfinder display.
A user may choose predetermined imaging requirements from a variety of available imaging modes, e.g. a panoramic mode or a 3D-mode. Depending on such choice of a user it can be indicated whether the further image is relevant to be taken as the second image on the basis of the output and with respect to the predetermined image requirements.
A particular developed configuration of the method may comprise the following four steps:
1. At least the first image data are specified by means of a frame on a viewfinder. In particular, an object size could be indicated by, e.g., a 3D-box. The first image is taken.
2. A further image is captured, either from the same viewpoint, e.g. at another angle, or from a different point of view.
3. The first and further image data are subsequently processed according to a change between the first image data of the first image and the further image data of the further image. Such a step accounts for the motion when changing from a first view to a second view, e.g. by changing an angle of view or a viewpoint or both.
4. An image may be taken, preferably in an additional step, if the change exceeds a predetermined threshold. Such a threshold may be set depending on the above-mentioned requirements and the image data. The processing of image data may also comprise supplying a coverage rate of a first and a further image. An image can be taken automatically, or the user may receive an indication that an image should be taken from a particular viewpoint.
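The decision of steps 3 and 4 can be sketched as follows, under the simplifying assumption that the "change" between two views is reduced to a single angular difference; the threshold value and function name are illustrative only, not part of the patent.

```python
def is_relevant(first_angle_deg, further_angle_deg, threshold_deg=30.0):
    """Illustrative steps 3-4: compute the change between the first and
    further image data (here reduced to the angular difference between
    the two views, one of several parameters the method allows) and
    indicate the further image as relevant once that change exceeds a
    predetermined threshold."""
    change = abs(further_angle_deg - first_angle_deg) % 360.0
    change = min(change, 360.0 - change)   # shortest angular distance
    return change > threshold_deg
```

In a full implementation the change would combine several of the image-data parameters (distance, viewpoint, brightness); the angular difference alone suffices to show the threshold decision.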
As outlined above, the proposed concept allows a better camera design. Such a camera calculates and conveys moving requirements and quantities to the user and therefore indicates all the steps necessary for taking a sequence of images, so that the user merely has to follow the prompts given by the camera to complete the sequence of images needed to image the whole object, either in a panoramic or a three-dimensional view.
The invention will now be described in detail with reference to the accompanying drawing. The detailed description will illustrate and describe what is considered a preferred embodiment of the invention. It should, of course, be understood that various modifications and changes in form or detail could readily be made without departing from the spirit of the invention. It is therefore intended that the invention not be limited to the exact form and detail shown and described herein, nor to anything less than the whole of the invention disclosed herein and as claimed hereinafter. Further, the features described in the description, the drawing and the claims may be essential for the invention, considered alone or in combination. The figures of the drawings illustrate in: Figure 1 a flow diagram of processes and interactions among modules of a preferred embodiment of the proposed method.
A preferred embodiment 1 of the proposed method is outlined in the following with regard to Figure 1. A computer program product may also be comprised by the preferred embodiment of the proposed concept, e.g. a camera system or image processing system may comprise such a computer program product.
Such a system includes in particular three kinds of modules: some auxiliary utility instruments 1A, a motion estimator 1B and a coverage calculator 1C. The flow of processing and interactions among the modules in Figure 1 is indicated by arrows. In step 1A the user 10 specifies the object size with the aid of an auxiliary utility instrument and makes the first shot. The auxiliary utility instrument may be a box-shaped frame 4, which helps the user 10 to specify the approximate object size, and which is utilized to compute the coverage rate of the object 3. In step 1B the motion estimator starts its operation. The motion estimator retrieves the approximate moving distance and direction between consecutive images. Such motion offsets are supplied to the coverage calculator 1C. The motion estimator computes and accumulates the motion parameters, including directions and quantities, according to the changes of scene and decides whether the present view is relevant or not. If the motion between a current view and the last image captured exceeds a specified threshold, the current view is relevant, and the camera will either take a shot directly or notify the user 10 that he needs to make a decision.
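The accumulate-and-decide behavior of the motion estimator described above can be sketched as follows; the per-frame motion magnitudes and the threshold are abstract placeholder quantities for the example, not values taken from the patent.

```python
def key_views(frame_motions, threshold=1.0):
    """Sketch of the motion estimator's decision loop: per-frame motion
    magnitudes are accumulated, and whenever the motion accumulated
    since the last captured image exceeds the threshold, the current
    frame index is marked as a relevant view (a shot would be taken and
    the accumulator reset)."""
    relevant, accumulated = [], 0.0
    for index, motion in enumerate(frame_motions):
        accumulated += motion
        if accumulated > threshold:
            relevant.append(index)
            accumulated = 0.0   # restart accumulation after a shot
    return relevant
```

For example, with motions `[0.3, 0.3, 0.5, 0.2, 0.9]` and the default threshold, frames 2 and 4 would be marked as relevant.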
As a schematic example, box 2 of Figure 1 shows a sequence of images 2.1, 2.2, 2.3, 2.4 and 2.5, taken from respective views and covering the object O.
Afterwards, in step 1C of the preferred embodiment, the coverage calculator 1C is informed by the motion estimator 1B that it needs to update the status of the captured images 2. Subsequently, the coverage calculator 1C may also be arranged to display the subsequent direction to the user on the viewfinder 1A or an LCD display. Further useful information, such as the coverage rate or the view-point positions (2.1 - 2.5), could also be shown on the viewfinder 1A for reference.
In addition to the motion estimator 1B and the coverage calculator 1C, a user 10 and a shutter 1E are indicated schematically in Figure 1. Each of these four modules provides specific commands or information to other modules and updates its own status or performs specific operations while receiving commands or information from others. Details of their functionality and responsibility are as follows:
1. The user module 10 represents the user himself. Advantageously, according to the preferred embodiment, the only action the user has to take is to determine whether or not to take the scene shown on the viewfinder 1A as an image when he is informed by the camera. This action is indicated by an arrow pointing from the user 10 to the motion estimator 1B.
2. Further, a shutter module 1E is provided. The term "shutter" is used to represent the whole image capturing mechanism within a digital camera. This includes the lens, the CCD array, the shutter and the electronics controlling the behavior of the camera. The shutter module receives commands from the user 10 and the motion estimator module 1B. When the taking of a shot is requested, the shutter is released and the current scene is taken as an image. A command is then sent to the coverage calculator 1C to update the coverage information. In addition to the techniques known from conventional devices, the shutter of the preferred embodiment also provides some internal communication channels 5 specifically adapted to communicate between the modules 1A, 1B, 1C, 1E and 10, as indicated by arrows 5 in Figure 1.
3. The major functionality of the coverage calculator 1C is to compute the coverage rate of the object 3 according to the object size specified at the beginning by aid of 1A and 4 and the motion offsets computed by the motion estimator 1B. When the shutter finishes capturing a picture, it sends an update command to the coverage calculator 1C. Subsequently, the coverage calculator 1C requests the motion offsets from the motion estimator 1B. After the coverage rate has been updated, the direction which should preferably be followed, i.e. the one which has been determined by the analysis of modules 1B and 1C, is advantageously sent to the shutter and displayed on the viewfinder to advise the user 10.
4. The motion estimator 1B performs a frame-by-frame comparison to retrieve the motion in between. While the camera is moving, this module runs a real-time motion estimation for every frame. For positions between each of the positions of the images 2.1 - 2.5 of the image sequence of the object O, the motion of the camera is accumulated, and if the motion exceeds a preset threshold, the critical frame is either recorded directly or the user is notified that a further image is relevant to be supplied to the whole picture. The motion estimator 1B may also handle queries from the coverage calculator 1C and provide the requested motion information, i.e. indicating only those relevant frames which have to be kept. This functionality is also referred to as key-frame selection.
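The frame-by-frame comparison of the motion estimator 1B can be illustrated with a toy translational estimator that aligns two small greyscale frames by minimising the sum of absolute differences (SAD). A real device would use a block-based or sensor-assisted estimator; this sketch only demonstrates the principle, and all names and parameters are assumptions.

```python
def estimate_motion(prev, curr, max_shift=2):
    """Toy frame-by-frame motion estimator: find the integer (dx, dy)
    translation that best aligns two small greyscale frames (lists of
    pixel rows) by minimising the mean absolute difference over the
    overlapping region of the shifted frames."""
    h, w = len(prev), len(prev[0])
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        sad += abs(prev[y][x] - curr[sy][sx])
                        n += 1
            if n and sad / n < best:
                best, best_shift = sad / n, (dx, dy)
    return best_shift
```

Accumulating the magnitudes of such per-frame offsets yields the motion quantity that is compared against the preset threshold for key-frame selection.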
Further preferred embodiments of the proposed concept may comprise modules to perform the data collection work, followed by off-line processing 6 such as 3D-model generation or object viewing and image stitching. In summary, to capture the outward appearance of an object and represent multiple images of the object at a later stage, the images should be taken from different viewpoints. These images can be used as the source for a 3D-browser or a 3D-model creator to obtain an enhanced 3D-experience. The user has to walk around the object and take all the pictures necessary for a sufficient 3D-view. The difficulty of making use of conventional imaging devices is that a user has to manually sort out any unnecessary pictures and has to decide on his own whether he has taken enough pictures. The proposed concept comprises an automatic means of motion estimation to determine the motion distance and direction between images, and an automatic means of coverage calculation to specify a rough object size and calculate whether an object is sufficiently covered to guarantee a sufficient view of the object.
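The coverage calculation ascribed to module 1C can likewise be sketched under a strong simplification: each captured image is assumed to cover a fixed angular sector of the walk around the object (in the patent this would instead be derived from the object size specified with the box-shaped frame 4), and the coverage rate is the covered fraction of the full circle. Names and the sector width are illustrative assumptions.

```python
def coverage_rate(view_angles_deg, sector_deg=72.0):
    """Illustrative coverage calculator: each captured view is assumed
    to cover a fixed angular sector centred on its view angle; the
    coverage rate is the fraction of the 360-degree walk-around covered
    by the union of those sectors (computed on whole degrees)."""
    covered = set()
    for angle in view_angles_deg:
        start = int(angle - sector_deg / 2)
        for d in range(int(sector_deg)):
            covered.add((start + d) % 360)
    return len(covered) / 360.0
```

With five evenly spaced views, as in the schematic sequence 2.1 - 2.5, the union of 72-degree sectors covers the full circle and the rate reaches 1.0.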

CLAIMS:
1. Method of imaging an object by means of a mobile imaging device, wherein at least a first and a second image of the object are taken to obtain different views of the object, the method comprising the steps of:
- selecting the object to be imaged,
- taking the first image for a first object view and specifying first image data,
- capturing at least one further image for a different object view and specifying further image data, characterized by
- processing the first and further image data automatically by the device to generate an output,
- indicating on the basis of the output whether the further image is relevant to be taken as the second image.
2. The method as claimed in claim 1, characterized in that it comprises the further step of taking the further image as the second image in case the further image is considered as relevant and assigning the further image data as second image data.
3. The method as claimed in claim 2, characterized in that it comprises the further step of updating image data to be used as first image data for a subsequent step.
4. The method as claimed in claim 3, characterized in that it comprises the further step of indicating on the basis of the updated image data a further different view from which a further image can be taken in a subsequent step.
5. The method as claimed in one of the preceding claims, characterized in that the object for imaging is selected with respect to predetermined imaging requirements.
6. The method as claimed in the preceding claim, characterized in that whether or not the further image is relevant to be taken as the second image is indicated on the basis of the output and with respect to the predetermined image requirements.
6. The method as claimed in the preceding claim, characterized in that whether or not the further image is relevant to be taken as the second image is indicated on the basis of the output and with respect to the predetermined imaging requirements.
7. The method as claimed in claim 5 or 6, characterized in that the imaging requirements may be determined by selecting an imaging mode from a group of optionally available modes.
9. The method as claimed in one of the preceding claims, characterized in that the first and further image data are processed according to a change between the first image data of the first image and the further image data of the further image.
10. The method as claimed in the preceding claim, characterized in that an image is taken if the change exceeds a predetermined threshold.
11. The method as claimed in one of the preceding claims, characterized in that the step of processing of image data further comprises supplying a coverage rate of a first and a further image.
12. Mobile imaging device for imaging an object, wherein at least a first and a second image of the object are taken to obtain different views of the object, the mobile imaging device comprising:
- a means for selecting the object to be imaged,
- a means for taking the first image for a first object view and capturing at least one further image for a different object view,
- a sensoring means for specifying first image data and further image data,
- a means for automatically processing first and further image data and generating an output,
- a means for indicating a relevance of the further image on the basis of the output.
13. A computer program product on a medium readable by a computing device, comprising a software code section which, when the product is executed on a computing device within a method according to the preamble of claim 1, induces the computing device to automatically process first and further image data to generate an output, and which indicates on the basis of the output whether the further image is relevant to be taken as the second image.
PCT/IB2003/002330 2002-06-07 2003-05-21 Method of imaging an object and mobile imaging device WO2003105466A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003233097A AU2003233097A1 (en) 2002-06-07 2003-05-21 Method of imaging an object and mobile imaging device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02077229.9 2002-06-07
EP02077229 2002-06-07

Publications (1)

Publication Number Publication Date
WO2003105466A1 true WO2003105466A1 (en) 2003-12-18

Family

ID=29724466

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/002330 WO2003105466A1 (en) 2002-06-07 2003-05-21 Method of imaging an object and mobile imaging device

Country Status (2)

Country Link
AU (1) AU2003233097A1 (en)
WO (1) WO2003105466A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0596136A1 (en) * 1992-05-19 1994-05-11 Nikon Corporation Read-only optomagnetic disk, and reproducing method and equipment
WO1998025402A1 (en) * 1996-12-06 1998-06-11 Flashpoint Technology, Inc. A method and system for assisting in the manual capture of overlapping images for composite image generation in a digital camera
JPH11150670A (en) * 1997-09-10 1999-06-02 Ricoh Co Ltd Camera device
US6466701B1 (en) * 1997-09-10 2002-10-15 Ricoh Company, Ltd. System and method for displaying an image indicating a positional relation between partially overlapping images
US6304284B1 (en) * 1998-03-31 2001-10-16 Intel Corporation Method of and apparatus for creating panoramic or surround images using a motion sensor equipped camera
WO2001074090A1 (en) * 2000-03-31 2001-10-04 Olympus Optical Co., Ltd. Method for posting three-dimensional image data and system for creating three-dimensional image
EP1289317A1 (en) * 2000-03-31 2003-03-05 Olympus Optical Co., Ltd. Method for posting three-dimensional image data and system for creating three-dimensional image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 1999, no. 11 30 September 1999 (1999-09-30) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1613060A1 (en) * 2004-07-02 2006-01-04 Sony Ericsson Mobile Communications AB Capturing a sequence of images
WO2006002796A1 (en) * 2004-07-02 2006-01-12 Sony Ericsson Mobile Communications Ab Capturing a sequence of images
US8077213B2 (en) 2004-07-02 2011-12-13 Sony Ericsson Mobile Communications Ab Methods for capturing a sequence of images and related devices
US9792012B2 (en) 2009-10-01 2017-10-17 Mobile Imaging In Sweden Ab Method relating to digital images
US9544498B2 (en) 2010-09-20 2017-01-10 Mobile Imaging In Sweden Ab Method for forming images

Also Published As

Publication number Publication date
AU2003233097A1 (en) 2003-12-22

Similar Documents

Publication Publication Date Title
CN109151439B (en) Automatic tracking shooting system and method based on vision
EP2225607B1 (en) Guided photography based on image capturing device rendered user recommendations
CN103685925B (en) Camera device and image pickup processing method
US6977687B1 (en) Apparatus and method for controlling a focus position for a digital still camera
CN100574379C (en) Have panoramic shooting or inlay the digital camera of function
RU2415513C1 (en) Image recording apparatus, image recording method, image processing apparatus, image processing method and programme
JP2003110884A (en) Warning message camera and method therefor
JP2003078813A (en) Revised recapture camera and method
US8988535B2 (en) Photographing control method and apparatus according to motion of digital photographing apparatus
JP2003116046A (en) Image correction camera and method therefor
CN102907105A (en) Video camera providing videos with perceived depth
US20070008499A1 (en) Image combining system, image combining method, and program
CN103081455A (en) Portrait image synthesis from multiple images captured on a handheld device
US7388605B2 (en) Still image capturing of user-selected portions of image frames
CN109688321B (en) Electronic equipment, image display method thereof and device with storage function
US8072487B2 (en) Picture processing apparatus, picture recording apparatus, method and program thereof
JP2015073185A (en) Image processing device, image processing method and program
CN110944101A (en) Image pickup apparatus and image recording method
GB2612418A (en) Rendering image content
CN107645628B (en) Information processing method and device
WO2003105466A1 (en) Method of imaging an object and mobile imaging device
JPH07226873A (en) Automatic tracking image pickup unit
US11803101B2 (en) Method for setting the focus of a film camera
JP2008288797A (en) Imaging apparatus
JP3114888B2 (en) Camera control device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP