GB2544268A - A system, method and scanning module for producing a 3D digital model of a subject - Google Patents


Info

Publication number
GB2544268A
GB2544268A (application GB1519463.2A / GB201519463A)
Authority
GB
United Kingdom
Prior art keywords
subject
raw data
scanning
sensors
digital model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1519463.2A
Other versions
GB201519463D0 (en)
Inventor
Howells Mark
Patel Jay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Plowman Craven Ltd
Original Assignee
Plowman Craven Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Plowman Craven Ltd filed Critical Plowman Craven Ltd
Priority to GB1519463.2A
Publication of GB201519463D0
Publication of GB2544268A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062Arrangements for scanning
    • A61B5/0064Body surface scanning
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12Acquisition of 3D measurements of objects
    • G06V2201/121Acquisition of 3D measurements of objects using special illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The invention provides a system for producing a three-dimensional (3D) digital model of a subject utilising a scanning module which has a plurality of static or fixed sensors that capture raw data representing unprocessed images of a subject located within a scanning volume. Each point on the surface of the scanning volume is within the field of view of at least two of the static sensors. This raw data is processed to produce the 3D digital model of the subject. In one aspect the raw data is part-processed in the scanning module in real time to produce an initial image with the system being in communication with a remote server which receives the raw data and/or the part-processed data to complete the processing and generate the 3D digital model. In a second aspect the system also includes a projector that projects a pattern onto the subject during data capture. In a third aspect the distance from at least one of the fixed sensors to the surface of the scanning volume is less than the focus distance of the sensor.

Description

A System for Producing a 3D Digital Model of a Subject, a Method for Producing a 3D Digital Model of a Subject and a Scanning Module for Producing a 3D Digital Model of a Subject
The present invention relates to a system, method and scanning module for producing a 3D digital model of a subject. 3D scanning systems recover information about points on the surface of an object, in particular their spatial co-ordinates and/or colour, and use this information to reconstruct digital models of the object which can be stored and are useful for many varied applications. The information collected by the scanner for a set of surface points is known as a “point cloud” of data, and this can be used to reconstruct the shape of the object by extrapolating to create a mesh (a process known as surface reconstruction).
Often, 3D scanners use moving depth sensors which determine distances to various points on the surface of the object as they travel around it. This type of scanning can be problematic when used to image humans, animals or other moving objects because it requires the subject to remain still for the time taken to complete the scan in order to achieve an accurate representation.
The recovery of the actual 3D co-ordinates of points on the surface of an object from a set of overlapping 2D images is known as photogrammetry, a word deriving from the Greek roots photos (light), gramma (drawing) and metron (measurement). Once 3D co-ordinates are known, a mesh or model formed from the set of co-ordinates can be used to feed into a 3D printer to produce a 3D object, to provide topographic maps, for use in manufacturing, forensics, film making and as a tool in many other fields.
Triangulation can be used to recover the actual co-ordinates of points which appear in 2D photographs or other images where depth information has been lost. Two images taken of the same point from different locations can be used together in order to recover the 3D co-ordinates by intersecting two rays (or lines of sight from the camera or detector through a particular point on an object) and applying simple trigonometry as shown in Figure 1. In order to use this method both the location and the pointing direction of both cameras must be established, i.e. the system must be calibrated so that α, β and L in Figure 1 are all known. Calibrating the system may involve finding current positions for the cameras by employing depth sensing devices such as lasers. Another common method is to use images taken by the cameras of objects with a known location and recover the camera positions and orientations this way using computer algorithms.
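The ray-intersection step described above reduces to a few lines of code once α, β and L are known. The following sketch is illustrative only: the 2D camera frame, function name and argument conventions are assumptions, as the patent names only the quantities α, β and L.

```python
import math

def triangulate(alpha, beta, baseline):
    """Recover the coordinates of a point seen from two calibrated cameras.

    The cameras sit at (0, 0) and (baseline, 0); alpha and beta are the
    angles (in radians) that each line of sight makes with the baseline.
    Intersecting the two rays gives the point's position.
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = baseline * tb / (ta + tb)   # intersection of y = x * tan(alpha)
    y = x * ta                      # and y = (baseline - x) * tan(beta)
    return x, y

# Symmetric case: both cameras view the point at 45 degrees, so it
# lies midway along the baseline at a height of baseline / 2.
print(triangulate(math.pi / 4, math.pi / 4, 2.0))
```

In a real system the same intersection is performed in three dimensions using the calibrated positions and orientations of each camera.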
Images will also be analysed during processing in order to match points in two or more overlapping images of an object’s surface. Techniques such as intensity or feature matching are often used to do this, and while this has traditionally been done manually, there now exist sophisticated algorithms to allow the process to be carried out by computer. In some applications a better result is achieved by projecting patterns of light onto the object so that images can be more easily compared. In US-A-2014/0028805, for example, a moving scanner is used to produce a 3D image of an object. The scanner comprises a tracker, whose position is measured using a laser and retroreflector, and two cameras moving with the tracker which take images of the object while various patterns are projected onto it. The projected patterns are used to relate points in different images before employing triangulation (using the measured positions of the tracker and cameras) to calculate their coordinates.
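Intensity matching of the kind mentioned above can be illustrated with a minimal normalized cross-correlation search. This sketch is an assumption for illustration, not taken from any of the cited documents: it slides a patch from one image along a search strip of a second image and returns the offset that correlates best.

```python
import numpy as np

def best_match(patch, strip):
    """Return the column offset in `strip` where `patch` correlates best.

    Both arrays hold greyscale intensities and share the same height;
    each candidate window is compared to the patch by normalized
    cross-correlation, which is robust to brightness differences
    between the two images.
    """
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    scores = []
    for off in range(strip.shape[1] - patch.shape[1] + 1):
        w = strip[:, off:off + patch.shape[1]]
        w = (w - w.mean()) / (w.std() + 1e-9)
        scores.append(float((p * w).mean()))
    return int(np.argmax(scores))
```

Production systems typically use optimised library implementations of this search rather than an explicit loop.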
The system described in EP-A-1207367 produces 3D images by projecting fringes onto the surface of an object and illuminating behind it. Two cameras are used in some embodiments and four in others. Several other patents describe the use of 2D scans to derive measurements of a subject’s body or to assess the health of a patient. EP-A-2148618, for example, describes a system for determining the physiological condition of a person from a 2D image by analysing visible factors such as the colour of the skin and WO-A-2013/169326 describes a method for examining changes in the skin of a patient by taking a number of overlapping 2D images to produce images of the whole body. Images taken at different times are then compared to detect skin abnormalities appearing in the time between consecutive images. GB-A-2449648 describes a system which uses images taken on a webcam. A measurement of the distance to a subject is used to estimate the scale of the image which then allows various measurements on the subject’s body to be calculated. In GB-A-2504711, the front view of a subject is acquired using an RGB depth sensor. A match is then found with the nearest possible 3D representation stored in a database. The back view is inferred from the front view using the data and database models.
Other documents describe 3D scanners which incorporate moving sensors or use subject rotation to take a series of images at different times (WO-A-2014/037939, EP-A-829231, WO-A-2007/102667, WO-A-2014/111391, and WO-A-2012/011068 are examples of such documents). US-A-5,953,448 and US-A-6,373,963 use phase measuring profilometry to create a 3D representation of an object in a system where light having a sinusoidal intensity distribution is projected onto an object. Interference fringes are produced, the position of which can be compared with reference values and used to retrieve information about the topography of the reflecting surface.
One potential use for 3D scanners is in gyms or health clubs. The ability of a user to record a 3D scan of their body over time will allow them to track their own progress as they undertake new fitness regimes. This has the potential to go a long way towards encouraging gym users to continue with their training; however, scanning systems which are compact and accurate enough to produce a useful image while still being affordable for gym-goers and health clubs have yet to appear on the market.
At present, methods of body measurement and tracking used in gyms are relatively crude and rely on body callipers, measuring tapes and weighing scales. BMI calculations and similar measures are used as an indicator of body mass and fat content. People starting personal fitness training programmes often become demotivated when they perceive very little improvement in body shape despite their efforts. Beneficial physical changes can appear imperceptibly small at first and require perseverance over a long time to see big changes.
According to a first aspect of the present invention, there is provided a system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image; the system being arranged to communicate with a server remote from the scanning module, for receiving the raw data and/or the part-processed data to complete processing and thereby generate the 3D digital model.
The system of the first aspect of the present invention provides essentially instantaneous, accurate and repeatable 3D scans with partially remote data processing and analysis. Requiring either the subject or the sensors to rotate, as many prior art systems do, is a significant disadvantage when scanning moving objects such as people because scanning times are increased and the subject is required to remain still throughout. The quality of the scan produced will be substantially decreased and there will be a higher level of discomfort for the person being scanned. Photogrammetric techniques enable accurate data capture and provide for data to be stored, measured and compared over time to provide detailed evidence of changes in an individual’s body shape. Part-processing the data in real time allows an initial image to be produced while remote processing frees up processing equipment in the scanning module more quickly, ready for the next user. While the description focusses on embodiments which use optical cameras for the sensors, it should be noted that any form of sensor capable of detecting electromagnetic radiation (or even in some cases sound waves) can also be used. It should also be noted that although all points on the surface of the scanning volume should be within view of at least two sensors while an image is being taken, this will clearly not include points on the floor of the scanning volume or booth underneath a subject’s feet. 3D scanning systems for use in gyms will preferably produce a 3D model of acceptable quality within a short scanning time. The processing carried out in the module between scans must also be minimised while still allowing customers to receive at least some information from their scan directly after it is taken. The present invention can provide these advantages as will be described in more detail below.
In an embodiment, each point on the surface of the scanning volume is within the field of view of three sensors. A view of each point from two sensors is required in order to derive three dimensional coordinates by triangulation. If each point is within the field of view of more than two sensors, however, a more accurate derivation of the coordinates is possible. Redundancy is increased and the likelihood of occlusions or obstructed portions on the surface of the subject is reduced. Calculations can be repeated for the same point using different camera pairs and an average of the results used to give the final coordinates. In a gym, or in any environment where a compact size for the scanning module is desirable, a large number of cameras will result in too bulky a configuration. In addition, each camera or sensor forming part of the scanner will add to the price of producing a module and the price that each customer will be required to pay for it. A balance must be found, therefore, between a desired (or adequate) resolution for the images and the cost and size of each module. It has been found that positioning sensors so that each point on the surface of the scanning volume can be viewed by exactly three sensors simultaneously provides good enough coverage to be used for deriving body measurements and noting changes in appearance over time. The number of cameras required to achieve this is also low enough to produce a module that is compact enough to be used in a gym (the module will fit within the floor space taken up by an average sized piece of gym equipment).
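The redundancy scheme above, triangulating the same point from every available camera pair and averaging the results, can be sketched as follows (function names are illustrative):

```python
import itertools
import numpy as np

def camera_pairs(n_sensors):
    """All camera pairs available for triangulating one surface point."""
    return list(itertools.combinations(range(n_sensors), 2))

def fuse_estimates(estimates):
    """Average the 3D coordinates derived independently by each pair."""
    return np.mean(np.asarray(estimates, dtype=float), axis=0)
```

With three sensors viewing each point there are three usable pairs, so each final coordinate is the mean of three independent triangulations.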
With four or more cameras imaging each point on the surface of the scanning volume simultaneously, a better 3D model can be produced and this may be preferable in higher end gyms where space and cost concerns are less of an issue and quality of the equipment is more important. For example, exactly four cameras may be positioned to view each point on the surface of the scanning volume simultaneously to achieve a better quality 3D model. Modules can be adapted to be adjustable in order that cameras can be added after purchase of the module if a customer wishes to upgrade. Walls of the module and mounting points (such as rails to which cameras are fixed) within the module can be adjustable to allow them to move outwards to incorporate additional cameras. Similarly, cameras can be removed in order to reduce the size of the module.
Points in this context refer to sites spaced across the surface of the scanning volume and projected onto the subject within the scanning volume at locations for which 3D coordinates are to be derived. These will preferably number no less than 1 per square millimetre on the surface of the subject. In some embodiments, points can be spaced closer together in regions of the subject which are likely to be more detailed but there should be no less than 1 point per square millimetre on any part of the surface in order to ensure that the final digital model is of adequate resolution.
In an embodiment, the system further comprises an app, downloadable onto a user device and/or accessed via a web based platform, which can allow a user access to their processed images and digital model from the server. This provides a convenient way to distribute information to users. The app or platform can also provide capabilities which allow the user to manipulate their 3D model, and view from different angles (rotate and zoom in on the 3D model), or request particular measurements themselves. For example, it can allow the user to select a section across the 3D model of their body and be provided with a measurement for the circumference, width or length of the part selected. These capabilities may be provided for a 3D model within a web browser, for example.
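A circumference measurement of the kind described, taken across a selected section of the body model, amounts to summing the edge lengths around the closed polygon where a plane slices the mesh. A minimal sketch with illustrative names:

```python
import math

def perimeter(points):
    """Perimeter of a closed cross-section polygon.

    `points` is an ordered list of (x, y) pairs sampled where a
    horizontal plane intersects the surface of the 3D model; the
    polygon is closed by joining the last point back to the first.
    """
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total
```

Width or length measurements for the selected section can be taken from the extent of the same cross-section polygon.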
In an embodiment, processing the images further comprises calculating measurements and statistics from the 3D digital model. In addition to processing data to produce the 3D models, processing can include subsequent analysis of the model in order to derive certain additional information. This can allow the user to easily visualise changes in measurements or sizes over time by way of line graphs, histograms or any other type of chart or graph.
In an embodiment, the processed 3D model of the user can be manipulated to visualise a ‘desired’ body physique, for example to show larger muscle growth on the arms and legs, and show a flatter abdomen. The system can predict how the user might look after a certain training/fitness plan had been accomplished - visualising the end goal to provide motivation.
In an embodiment, processing the images further comprises analysing the images to assess skin conditions. Information such as colour of parts of a subject’s body can be used to assess health of the subject. Changes over time for repeat users of the scanner can also be pinpointed in this way.
In an embodiment, the sensors comprise optical digital cameras. Optical digital cameras of high resolution are available on the market at a good price and are suitable for use in the system of the present invention. These will in general include CCDs or CMOS which enable data to be efficiently fed from the camera at intervals and stored digitally. This capability can be useful when a number of scans are likely to be recorded within a short space of time. Cameras such as the Raspberry Pi camera module are also particularly suitable.
In an embodiment, the system is arranged to operate in response to an initial user input, without further user interaction. Once a user has initiated the scan (for example by pressing a trigger) the process of data capture by the sensors, the transfer of data to elsewhere within the module, to a server or data storage facility as well as the processing of the raw data to produce the 3D digital model will all take place automatically, without further input from the user. This means that the module can be operated easily in a gym environment without the need for the presence of skilled technicians or programmers. The initial user input can simply comprise initiating the scan but can also include several selections made by the user or personnel working in a gym prior to initiating the scan. They can, for example, decide how they wish the part-processed image to be displayed or which image they wish to display within the booth as well as which prior data they wish to compare with the current scan data to produce overlaid images or statistics.
In an embodiment, the raw data is captured by all sensors at substantially the same time. It is preferable to minimise the time taken to collect raw data for each scan. This can be achieved in part by ensuring that all of the sensors (e.g. the cameras) are configured to capture the data at the same time and that the signal used to initiate data capture by the sensors can be transferred to all of the sensors simultaneously. The shutter speeds and apertures, where applicable, can also be configured to be similar for all sensors.
In an embodiment, the scanning module further comprises a system for illuminating the subject during image capture. Providing some type of “flash” during image capture can help to achieve a better image, particularly in darkened environments. It is preferable for the background to remain dimly lit or unlit in order to provide contrast with the illuminated subject. Flashes may be integral to the sensors or cameras used, or may be external and placed at various points inside the module as described in more detail below.
In an embodiment, at least two images are taken for each scan, wherein during one image capture event the subject is illuminated and during another image capture event a pattern is projected onto the subject. The image taken with a projected pattern can be used to improve the accuracy of the derivation of point coordinates by allowing the same point in different images to be more easily picked out (to help solve the so called correspondence problem). The other image taken with the subject illuminated by a flash can be projected onto the mesh produced to provide a more realistic, real-colour 3D digital model.
In an embodiment, at least two images are taken for each scan, one of which is taken through a polarizing filter and the other through a filter having a different polarization or without a polarizing filter. In the image taken through the polarizing filter, polarized light reflected from parts of the body such as the forehead will not reach the camera (much like how a polarizing filter or sunglasses can reduce glare due to light reflected from the surface of a lake). The resulting image will show a more realistic colouring and will not be affected by glare.
In an embodiment, the system for illuminating the subject comprises one or more LED lightbulbs. Although any light source can be used, LED flashes can strobe faster than other light sources and are therefore preferred. This means that if two images are to be taken in close succession, the time between consecutive images can be minimised (as compared to using a xenon light source or a light source of any other kind). LED light sources may be more expensive than more traditional options, which can instead be used where a cheaper alternative is required.
In an embodiment, the light sources are placed behind a diffusing material to distribute the light evenly over the subject. This method will also limit the shadows and overexposed highlights on the subject, thus producing a better quality colour texture of the subject. Acrylic LED light diffusing panels having a thickness of approximately 3mm are used to disperse the light to control the intensity of the light. These panels are lightweight and are optimised for use with LEDs. The light transmission through the panels may be around 60% but the particular specifications will depend on the type of flash lighting used and the desired effect. Panels can be removable and replaceable within the module to aid repair and to allow a different effect to be achieved if desired by changing the colour, material or light transmitting properties of the panels.
In an embodiment, the system further comprises a positioning system for ensuring that the subject is within the scanning volume and in a desired pose. In general a subject will fit within the scanning volume; however, the smaller the scanning volume, the more compact the overall module. The size of subjects will vary somewhat, so that designing the scanning volume to fit the largest anticipated subject size in all poses may not be practical in terms of providing a compact and cost effective module. This means that in some situations parts of a subject's body may extend out of the scanning volume in some poses. Because of this, in order for an image to cover the whole of a subject, they need to be standing in the correct position on the floor in order to place them entirely within the scanning volume and within the field of view of the sensors.
In an embodiment, the positioning system comprises one or more handles for the subject to hold onto during the scan. Handles can provide a useful way to encourage the user to position their arms correctly (e.g. by their sides and slightly out from the body in order to ensure that as much of the body as possible is visible). A trigger or button can also be provided on the handle so that a scan can be initiated easily without the subject moving out of the pose.
In an embodiment, the positioning system comprises a live video feed of the subject shown on one or more screens within the module and overlaid with an outline indicating the optimum pose. A video can be combined with the handles above and markings on the floor indicating where a subject should stand. The subject can then use the video screen and outline in order to correct their position to ensure that their live image is located within the outline on the video screen and hence within the scanning volume.
In an embodiment, the system further comprises a device for measuring and recording information relating to the weight and/or biometric data of the subject.
Biometric data can include information relating to fat, muscle, bone and water content. This data might be collected using bio-electrical impedance technology. This might be of the type used by Boditrax Technologies Ltd (based in Nottingham), for example. When this type of technology is used the subject touches a number of electrodes at which point current is passed through the body and the resulting impedance (higher where fat is encountered than in hydrated muscle tissue) is measured to provide information relating to body composition. If weight is being recorded, the device may be a set of weighing scales located within the booth. These scales will provide an indication to the subject of where to stand in addition to weight data which will complement the raw scan data. Data from the scales can initially be transmitted by wired or wireless connection to a central data store within the module or directly to the server.
In an embodiment, the information relating to the weight and/or biometric data of the subject is stored on the server. The weight information can be stored with the raw, processed or part-processed scan data relating to a particular subject and can be accessed by the subject. During processing, weight information can be used to generate various measures, such as a BMI value.
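As an example of the measures that can be generated during processing, BMI is simply the recorded weight divided by the square of the subject's height; a hypothetical helper (height could be taken from the 3D model itself):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms over height in metres, squared."""
    return weight_kg / height_m ** 2
```

For example, a subject weighing 80 kg and standing 2.0 m tall has a BMI of 20.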
In an embodiment, the system further comprises means to measure and visualise changes to the 3D data using heat maps and comparison images of some or all of the surface of the subject. Heat maps can be overlaid onto the 3D mesh (in a similar way to the flash image). These heat maps show changes between scans taken at different times, coloured to illustrate the extent of the change that has occurred.
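The colouring of such a heat map can be derived from per-vertex displacement between two aligned scans. This sketch (names are assumptions) normalises the displacement magnitudes to the range [0, 1], ready for mapping onto a colour ramp:

```python
import numpy as np

def change_heat_values(verts_before, verts_after):
    """Displacement magnitude per vertex between two aligned scan meshes,
    scaled to [0, 1] for colouring a heat map overlay."""
    d = np.linalg.norm(np.asarray(verts_after, dtype=float)
                       - np.asarray(verts_before, dtype=float), axis=1)
    peak = d.max()
    return d / peak if peak > 0 else d
```

This assumes the two meshes have already been registered vertex-for-vertex, which is why the posture correction discussed below matters for meaningful comparisons.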
Images overlaid with temperature data can also be provided. Temperature may provide an indication as to where fat is stored on the body, or which muscles have been utilised the most during a workout. In order to provide temperature data covering the whole subject, temperature data will need to be collected for a set of points which can represent the whole extent of the body. One or more temperature detectors can be used, although in order to measure temperature over the whole body either a moving detector or a number of detectors will need to be used (or the subject rotated). Alternatively, a simpler temperature map can be provided of just the front of the subject or an image from just one location using one temperature sensor. This image can still be overlaid onto the mesh as viewed from the direction of the temperature sensor. The data can also be used to guess how a temperature map might look from angles for which data is not available and to provide a representation for overlaying onto the 3D mesh. Temperature data can also be presented in the form of statistical data as for weight rather than, or as well as, projecting onto the mesh. Images taken at different times can be compared to assess changes in the temperature maps over time for a particular subject.
In an embodiment, the heat maps and comparison images of the subject’s body are stored on the server. Comparison data can be stored with the scan data in a similar manner to the weight data mentioned above and can be accessed by the user after the scan.
In an embodiment, the digital 3D models are digitally altered during processing to correct posture and/or to account for breathing. It is likely that when scans are taken at different times the user will not be standing in exactly the same position. This may be due to movement when breathing or simply due to a change in posture between scans.
In order to properly compare scans it is preferable that the positioning of the subject is as similar as possible during both. The positioning can be digitally altered to bring the positioning into line as part of the post-processing. This can be done automatically at the server (or in the module during the initial processing).
In an embodiment, the system comprises a server, remote from the scanning module for receiving the raw data and/or the part-processed data to complete processing and thereby generate the 3D digital model.
According to a second aspect of the present invention, there is provided a system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a projector for projecting a pattern onto the subject during image capture, the pattern comprising a light coloured image overlaid with a dark coloured image; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data. Including a light and a dark coloured image or pattern as part of the projected image ensures that the pattern will be visible on different skin tones.
In an embodiment, the light coloured image and the dark coloured image each comprise a plurality of smaller patterns that are repeated. In an embodiment, the repeated patterns comprise grids. The grid provides simple straight lines which will deviate across the contours of the body’s surface. The software is therefore provided with both a regular fringe pattern (grid) and irregular noise pattern.
In an embodiment, the grids are square grids. In an embodiment, the light and dark coloured grids are of the same size. Using two patterns having the same shape and size simplifies production and possibly also processing, since the projected linear grid pattern will deviate around the user's body to visually provide contour lines, thus helping the system to digitally reconstruct the subject's body.
In an embodiment, the light and dark coloured images are offset from each other. In an embodiment, the light and dark coloured images are offset from each other in two orthogonal directions. This ensures that both patterns are visible, particularly where the two patterns are identical or are formed of a series of smaller patterns that are repeated to form the whole.
According to a third aspect of the present invention, there is provided a system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the respective sensor; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
In an embodiment, the distance from at least half of the sensors to the surface of the scanning volume is less than the focus distance of the respective sensors.
In an embodiment, the sensors are positioned such that the distance from each of the sensors to the surface of the scanning volume is less than the focus distance of the respective sensor.
According to a fourth aspect of the present invention, there is provided a method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; part-processing the raw data in the scanning module in real time to produce an initial image; communicating the raw data and/or part-processed data to a server remote from the scanning module for completion of processing of the data to generate the 3D digital model.
In an embodiment, the method comprises at the remote server, processing the raw data and/or part processed data to generate the 3D digital model.
According to a fifth aspect of the present invention, there is provided a method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; projecting a pattern onto the subject during data capture, the pattern comprising a light coloured image overlaid with a dark coloured image; processing the raw data to generate a 3D digital model from the raw data.
According to a sixth aspect of the present invention, there is provided a method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the sensor; processing the raw data to generate a 3D digital model from the raw data.
According to a seventh aspect of the present invention, there is provided a scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image, the scanning module being arranged to upload the initial image to a server for completion of processing to thereby generate the 3D digital model.
According to an eighth aspect of the present invention, there is provided a scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a projector for projecting a pattern onto the subject during data capture, the pattern comprising a light coloured image overlaid with a dark coloured image; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
According to a ninth aspect of the present invention, there is provided a scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors, and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the sensor; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, in which:
Figure 1 illustrates how 3D coordinates are calculated by triangulating;
Figure 2 shows an overview of the system;
Figure 3 shows the architecture of the system including the back-end and user devices;
Figure 4 shows the appearance of a scanning module;
Figure 5A shows a scanning booth from above;
Figure 5B shows a perspective view of a scanning booth;
Figure 5C shows a perspective view of the inner frame of a scanning booth;
Figure 6 shows an example of a projection pattern for projecting onto the subject during a scan;
Figure 7 shows a user with a pattern projected onto them during a scan;
Figure 8 shows a floor based grid and control pattern;
Figure 9 shows a 3D model comparison of two data sets as a heat map to highlight the changes;
Figure 10 shows some examples of statistical data derived from 3D models and complementary data collected in the scanning module and displayed in graphical form;
Figure 11 shows overlaid scans taken at different times to illustrate changes in a subject's body shape;
Figure 12 shows an example of a display screen including the 3D scan and measurement data calculated from the scan;
Figure 2 shows an overview of the system which comprises a scanning module, a client app or web portal and a back end including at least one secure web server and a database on the server or coupled to the server and Figure 3 shows the architecture of the system in more detail. The scanning module or booth includes scanning means for collecting the raw data needed to produce a 3D image. Data capture generally uses cameras or sensors mounted within the module (along with illumination and positioning systems such as those described in detail below). Data is transferred to and stored on the server or servers in cloud storage after a scan. Processing can take place before, after or during upload to the server. Once data has been uploaded a user can either download an app onto a personal device such as a laptop, tablet, mobile phone or iPad® which allows them to retrieve their data, or they can use a web portal. In order to provide adequate security, personal login details and/or passwords will generally be required in order to access data associated with a particular user on the server.
The scanning module comprises controls to allow the user to initiate scanning, a 3D capture system including cameras having sensors which convert the collected light into electronic signals to be passed to a processing system, and a lighting system for illuminating the subject. The module will also contain an interactive display screen and may also include other devices such as the weighing scales for collecting complementary data.
Figure 4 shows the inside of a scanning module. Cameras can be mounted on the wall of the module using brackets (not shown) but can also be attached to a frame which is installed within the walls of the module. Brackets are more efficient in terms of space and are more attractive visually but a frame structure may be easier to install. If the cameras are required to move then a frame may be necessary in order to allow cameras to slide along a pole or rail in the correct direction. Cameras can also be arranged to be held in moveable brackets and configured to point in different directions during the scan in order to view different parts of the subject. It is usually preferable, however, for cameras to remain still during a scan to reduce scanning time. The module will contain enough cameras that each point on the surface of a scanning volume in which the subject stands is imaged simultaneously by at least two cameras (or other sensors), and preferably by three.
Brackets should in any case be moveable to some extent, or the camera position adjustable in a bracket or along a rail, in order to allow for recalibration of the camera positions and pointing directions. Camera position and orientation can be automatically adjusted if the system is self-calibrating. These can also be adjustable either via a computer within the module, by remotely controlling the positions, or by physically entering the booth and manually re-adjusting. The remote re-adjustment capability may be useful if expert adjustment is required to save the cost of transporting a trained person to the location. It may be easier to carry out this calibration step during installation of the module; however this can be done prior to installation. Recalibration may be necessary after a time and may require the attention of a skilled technician; however module owners can be given instructions and the necessary equipment to carry out the calibration steps themselves once the module is installed. An automatic calibration system can also be set up so that before each scan, after a certain time period, or on request from a user, the system is able to automatically calibrate itself. The calibration can be achieved by imaging points within the booth that have a known location (X, Y and Z coordinates are known). Markings may be located on the walls or floor of the module for use during calibration. Small coloured spots are particularly suitable for use as markings but any method that distinguishes one position in the module from its surroundings is suitable. Equipment already present in the module can be used, such as the positioning hand rail, interactive screen or markings on the floor used to indicate to a user where they should stand during a scan. A module may comprise an outer structure and an inner structure. 
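As an illustrative sketch of how imaging points with known X, Y and Z coordinates can recalibrate a camera, the direct linear transform (DLT) below recovers a 3x4 projection matrix from six or more known markings and their pixel positions. The function names are illustrative, not taken from the patent; a production system would typically refine this estimate further:

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Direct Linear Transform: recover a 3x4 camera projection matrix
    from >= 6 known 3D points (e.g. calibration spots on the booth
    walls) and their observed pixel coordinates."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # the solution is the right singular vector for the smallest
    # singular value of the stacked constraint matrix
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, pt):
    """Project a 3D point through projection matrix P to pixel coords."""
    x = P @ np.append(pt, 1.0)
    return x[:2] / x[2]
```

The recovered matrix encodes both the camera position and its pointing direction, so drift in either can be detected and corrected automatically.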
The cameras, lighting and other equipment can be mounted on the inner structure while the outer structure provides a protective shell and a more attractive appearance from the outside of the module. The inner structure may be a frame-like structure as described above, may be made up of several panels or a continuous piece. A frame structure may be more convenient in terms of mounting the cameras and assembling the booth. If the inner or outer structures are panelled, joints may be hinged or clipped.
Camera type, number, position and orientation are optimised in order to be able to provide a module that will fit into the same space as an average piece of gym equipment whilst still displaying a good quality image. Here “space” generally refers to floor space, however the module must be able to be installed in a typical gym environment so it must not be taller than the ceiling height. One possible design for the module is shown on the right of Figure 4. This design minimises the space taken up by the module (given the positions of the cameras within) and minimises complexity for ease of manufacture. Modules can comprise a covered entrance to allow users privacy during the scan and possibly a place to change clothing beforehand.
When cameras are to remain static during a scan the module will preferably contain between 10 and 200 cameras and more preferably between 30 and 125 cameras. This number will depend on the focal length, field of view (FoV) of the lens used, megapixel count and the sensor design as well as other factors. If moving cameras are used, the module will preferably contain between 5 and 50 cameras. Modules can also be optimised to fit different spaces. If a gym only has a certain amount of space available for a module, wider lenses can be used in order to allow for a smaller overall structure.
It has been found that surprisingly good results can be achieved with cameras positioned closer to the subject than their suggested focus distance. This allows the booth to be more compact whilst still achieving good quality images. As an example, using Raspberry Pi cameras, for which the suggested minimum focus distance is 0.7m, a good image can result from placing cameras between 0.35m and 0.7m from the subject. Even though the image captured by each sensor will not be perfectly in focus, the processing software can achieve an accurate 3D model. The result is improved further by use of the projectors and optimisation of cameras. With a shorter focus distance it is generally preferable to position sensors so that at least three image each point on the surface of the subject simultaneously. However, it is still possible to position fewer cameras to image each point in cases where it is important to keep costs down. Sensors can be placed at different distances from the subject so that only a number of the sensors (which can be any proportion of the total number) are placed closer to the surface of the scanning volume than the focus distance. Good results are achieved with some sensors placed 0.35 meters from the subject (or scanning volume surface) and some placed 1 meter from the subject (or scanning volume surface). For example, half the sensors can be placed at 0.35m distance and half at 1 m distance from the subject to ensure a good coverage of the subject within the scanning volume.
An example of one way in which cameras may be mounted is shown in Figure 5A in which a scanning module is shown from above. The booth in this case is rectangular (although it may be any other shape such as hexagonal, square and so on). Two sides measure 1.4 meters (or in the range of 0.8 to 2 meters) and the other two measure 1.8 meters (or in the range of 1 to 2.5 meters). Six cameras or sensors 5 are mounted in each corner along a pole or frame at different heights. Cameras or sensors 5 are also mounted at different heights on poles part way along each side of the booth. In Figure 5A poles are shown mounted at roughly equal distances from one another along the back wall (the distance between sensors may be roughly 40cm, not including the area taken up by the sensors themselves). Further poles are mounted 40cm from the corner poles along the side and front walls. Vertical poles (13 in this example) may contain six cameras separated vertically along the poles by roughly the same distance with the uppermost camera at a height of between 1.5 and 3 meters, more preferably between 1.8 and 2.4 meters, and preferably 2 meters (slightly taller than a typical user of the booth). The cameras at the corners are located furthest from the subject in the centreline of the booth (around 1.1 meters from the centreline if 1.4 and 1.8 meter sides are used) and these will be able to see most of the subject. The cameras mounted halfway along the longer sides are the closest to the subject (0.7 meters from the centreline if 1.4 and 1.8 meter sides are used). Although specific measurements are given here, it is not necessary that these be adhered to so long as sufficient cameras or sensors are present to allow each point on the surface of the scanning volume to be imaged by at least two sensors. One pole may be provided halfway along each side wall, for example. In another example cameras can be mounted closer to the centreline or subject. In embodiments additional poles or fewer poles may be used.
Some poles may support fewer (or more) than six cameras spaced at different heights. In an embodiment, the corner poles as well as those along the longer sides each hold six cameras, while poles on the short sides hold fewer (for example three) cameras.
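The centreline distances quoted above follow directly from the booth dimensions; a quick check, assuming the 1.4 m x 1.8 m example:

```python
import math

# half-width and half-length of the example booth
half_w, half_l = 1.4 / 2, 1.8 / 2

corner_dist = math.hypot(half_w, half_l)   # corner pole to centreline
mid_side_dist = half_w                     # pole halfway along a longer side

print(round(corner_dist, 2))    # ~1.14 m ("around 1.1 meters" in the text)
print(round(mid_side_dist, 2))  # 0.7 m
```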
In the example shown, no sensors are mounted in the area of the booth 6 through which the subject enters and exits. In embodiments a door may be provided on which further sensors can be mounted. This door can be opened (for example by way of a hinge) to allow a subject to enter and exit the booth and can be closed once a subject is inside in order to position the sensors in the correct position for scanning. The inner wall 1 is formed from diffusing material such as PVC and lights 3 for illuminating the subject are mounted behind this in order that light is diffused before hitting the subject to provide more uniform lighting. In the figure sensors 5 are coupled to Raspberry Pi modules 4 which provide some level of control and store or relay data.
Lights 3 (which may be LED lights) can also be coupled to and controlled by the Raspberry Pi modules. There must be enough leeway to allow the sensors to be repositioned during calibration or readjustment. LED lights may be mounted to the inner wall, suspended between the inner and outer walls (for example by hanging from an upper frame or roof or by attachment to poles supporting the sensors) or may be mounted to the outer walls.
Outer wall 2 may be formed of diffusing material similar to the inner wall to provide an illuminating effect and an attractive appearance from the outside of the booth. Alternatively, the outer wall may be opaque or mirrored on the inner surface to increase the intensity of the light inside the booth during image capture. Alternatively, an additional frosted external wall may be provided that can be illuminated with coloured or plain lights to improve the exterior appearance whilst still including an opaque or mirrored wall sandwiched between the frosted wall and the inner diffusing wall through which the light for illuminating the subject will pass. Corners of the outer layer may be rounded (such as corner 7) or squared (such as corner 8). All corners may be rounded, all squared, or some rounded and some squared depending on the desired location and appearance.
Figure 5B shows a perspective view of the outside of a booth according to one possible design. The outer shell or wall 2 with rounded corners is visible as well as roof 9. Including a roof structure is preferable to keep as much light as possible within the booth and for privacy reasons. Arrows show the route into and out of door 6. Figure 5C shows one possible configuration of the inner structure. Poles 11 carrying sensors 5 are supported vertically between panels 10 which are formed of a diffusing material. In the example shown panels are approximately 40cm wide. Horizontal poles may also be included to lend support to the vertical structures shown in Figure 5C.
Projectors, computers, switches, and other equipment that may be mounted within the booth are not shown. These may be provided on a console or terminal either inside or outside of the booth. Mounting a user interface and switches together inside the booth may be more convenient if the booth is designed to be operated by the user themselves. Multiple control terminals may be provided so that the booth can be operated by the subject and by a person outside of the booth. There may be override capability.
Cameras may be arranged so as to automatically move closer to the subject prior to a scan. For example, the configuration of the sensors in the booth when open may be different to the configuration when the booth is closed or when the scan is initiated. Cameras may move on telescoping stands (or in any other way) to a predetermined position before the start of the scanning process and movement of the cameras into position may, in embodiments, occur in response to the closing of the booth, to detection of a subject within the scanning volume or booth, to a certain time having elapsed from scan initiation or booth entry or to some user input.
Optimisation can be achieved using computer programmes which allow the parameters which are to be optimised and particular constraints to be met to be used as input for an optimisation model. One constraint could, for example, be the requirement that cameras are located and pointed so that three cameras image each point on the scanning volume simultaneously. Other constraints might include a maximum volume for the module, maximum scanning time or some camera properties if the sensor type is to be fixed. The programme can then return the optimum camera positions and orientations for the scanning module (along with camera properties if these are not fixed).
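The patent leaves the optimisation programme itself unspecified; one simple way to handle the coverage constraint (every surface point seen by at least three cameras) is a greedy, set-cover-style selection over a precomputed visibility matrix. The sketch below uses illustrative names and assumes the visibility of each candidate camera pose has already been computed:

```python
import numpy as np

def greedy_coverage(visibility, required=3):
    """Pick candidate cameras until every sample point on the scanning
    volume surface is seen by at least `required` cameras.
    `visibility[i, j]` is True when candidate camera i sees point j."""
    n_cams, n_pts = visibility.shape
    vis = visibility.astype(int)
    chosen = []
    seen = np.zeros(n_pts, dtype=int)
    while (seen < required).any():
        still_needed = (seen < required).astype(int)
        gains = vis @ still_needed      # under-covered points each camera sees
        gains[chosen] = -1              # never pick the same camera twice
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            raise ValueError("candidates cannot satisfy the coverage constraint")
        chosen.append(best)
        seen += vis[best]
    return chosen
```

Constraints such as maximum module volume or fixed sensor properties would shape the candidate set before selection; a greedy pass gives a workable, though not provably optimal, layout.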
As mentioned, cameras may be static and located so as to surround the subject or they may be mounted on moveable frames and moved around the subject and/or upwards and downwards within the module while consecutive images are taken in order to provide a full 360 degree view. The latter requires fewer cameras, however a single scan will take longer to complete. Where scanning is taking place in a gym, users will understandably want the scanning process to be as quick as possible and users are likely to move during a longer scan which will reduce the image quality. Where cameras are static they will completely surround the subject in order that a scan can be recorded essentially instantaneously. In other words, when a scan is taken all of the cameras will capture an image at roughly the same time. Cameras will be required to be linked in order that signals can be sent to the entire set of cameras at once.
It is preferred that each point on the subject's body (other than, of course, the soles of their feet or other obscured portions) is within the field of view of at least two, and more preferably exactly three cameras during the scan. This will require images from different cameras to overlap. Although only two cameras in stereo are required to be directed towards a given point on the surface of an object in order to calculate distance to the point, having three cameras directed at each point makes the calculation of 3D coordinates more accurate and reduces errors whilst still keeping the number of cameras required inside the module at an affordable level for gym owners. The system of the present invention can achieve a resolution of one point per 0.5 to 5 mm. This resolution will, however, depend on the number of cameras imaging each point on the surface of the subject simultaneously.
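The triangulation of Figure 1 generalises naturally to any number of cameras. A minimal least-squares sketch (assuming ray origins and directions come from already-calibrated cameras) shows why a third view over-determines, and hence stabilises, the 3D point:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point closest to a set of camera rays.
    Each ray is an origin o_i plus a direction d_i; solving
    sum_i (I - d_i d_i^T)(p - o_i) = 0 gives the point p minimising
    the summed squared perpendicular distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto plane normal to d
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

With noisy pixel measurements, each extra ray adds an independent constraint, so the solved point drifts less than with the two-ray minimum.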
Images for use in gyms and for other similar applications do not need to be as sharp as those used, for example, in producing CGI for films. Less data can also mean faster processing speeds so that the user can view their data more quickly. Light focussed by the camera lenses is collected by an image sensor such as a CCD or a CMOS (light may also be split in order to differentiate between colours and directed to illuminate several CCDs). The data from the sensor is passed to one or more computers or storage devices within the module. There will preferably be only one central storage device within the module to which all of the cameras will transfer their data during a scan. This storage device may provide only temporary storage for the raw data before transferring processed, part-processed or raw data to the server.
When a scan is to be taken, the user will enter the scanning module through a door or curtain. At this stage lights can be turned on in order to allow the user to familiarise themselves with the layout of the module. The scanning module comprises a user interface (for example a touch screen or keyboard and screen) which allows the user to enter their username and password on entry. The screen can then show the user information to assist with alignment prior to the start of the scan such as by means of a video and outline as described above. Outlines might be based on a previous scan for that user or calculated using video camera footage to determine the size and rough shape of a subject. The outline will generally be shown on the screen in front of a live video. Videos and outlines might be shown for several views at once (for example one side view and one front view) to help the user to position themselves more accurately.
Instead of or in addition to prompts on the screen the module can be provided with a physical device (such as the handles described above) to help with alignment.
This aid might be in the form of markings on the floor of the module which show the user where to stand (footprints, for example). There may also be a transparent board or a pole to stand against. Handles are particularly advantageous when spaced apart enough to encourage users to hold their arms slightly outward during the scan, which exposes a greater proportion of their body. Handles or other devices can also be adjustable to fit different users. A scale can be provided so that the user can adjust the handles to the same height for consecutive scans. A record can also be made of the adjustment position and stored along with the rest of the user’s data for access at a later date.
Where handles are used, either a button is provided on the handle in order that the user can start the scan and remain in position, or a delay is provided between initiation of a scan by the user and capture of the image to give them time to take up the necessary pose. Voice activation can also be used to start a scan.
The interactive screen or user interface can also include a help option which, when selected, will lead the user to an instructional video which will guide them through taking an image. This may help to ensure that the user is standing in the correct pose during the scan. Speakers can be included in the module in order to allow a user to hear instructions which, again, can help to guide them through the scanning process or to correct them when, for example, they need to alter their position.
When the user is ready and in position, they will initiate the scan at which point lights in the module will be dimmed or turned off and the sensors will begin collecting image data. As mentioned, a subject may be illuminated by one or more flash bulbs during the scan in order to provide better contrast with the background and to give a more realistic representation in the final image to be presented to the user. This is particularly useful if a full colour image is desired. The background may be white in order to help illuminate the subject (due to light bouncing from the walls of the booth and back onto the subject during image capture). In some embodiments, the background may also be specifically designed to be dark (using a dark module wall or curtain for example) in order to increase the contrast between the illuminated subject and the background. In a preferred embodiment, two or more images are taken in quick succession, one with the image illuminated by flash and the other with a pattern projected onto the user to aid subsequent processing steps. This pattern can include a random component which will help the system to accurately reproduce the 3D object as explained in more detail below.
An example of a possible pattern for projection onto the subject is shown in Figure 6. This pattern comprises a white grid overlaid on top of (or underneath) a black grid. Each square of the resultant black and white grid contains a number (in this case around 400) of small squares which are coloured randomly in either black or white. Here only black and white squares are shown, however a number of different shades of grey can be used, or black, white and grey squares only. The pattern could also comprise a series of different colours but shades are often easier for processing software to interpret. The random pattern ensures that all squares of the grid look different enabling the processing software to more accurately match areas on the object appearing in images from different sensors. The processor will not, for example, confuse parts of the surface of the subject which appear similar or have similar shapes. In embodiments, the randomised pattern could be repeated over a number of grid squares (a large enough area that the processor will not confuse one grid square with another), however a completely randomised pattern ensures that all squares of the grid appear different which will aid the processing software.
Including both a black and a white grid ensures that the grid will be clearly visible on different skin tones. The white grid, for example, might show up better on dark skin whereas the black grid will provide a clearer image when projected onto users with lighter skin tones. In Figure 6 the two grids are offset by half a square in the vertical direction (in the direction of arrow X) and the horizontal direction (arrow Y). The offsets are marked in the figure as Xoffset and Yoffset. This offset means that both patterns are easily visible. Offsetting by half a square minimises any possible confusion between the two grids by the processing software.
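A pattern in the spirit of Figure 6 can be generated procedurally. This sketch (sizes and function names are illustrative, not the patent's actual pattern) fills each grid cell with random black/white sub-squares and then overlays white and black grid lines offset by half a cell:

```python
import numpy as np

def make_random_fill(cells=8, sub=20, rng=None):
    """Random black/white sub-squares: with sub=20, each grid cell
    contains 400 small squares, so no two cells look alike."""
    rng = rng or np.random.default_rng()
    n = cells * sub
    return rng.integers(0, 2, size=(n, n), dtype=np.uint8) * 255

def overlay_offset_grids(img, sub=20, line=2):
    """Overlay a white grid, then a black grid offset by half a cell in
    both directions, so at least one grid is visible on any skin tone."""
    out = img.copy()
    n = out.shape[0]
    for k in range(0, n, sub):            # white grid lines
        out[k:k + line, :] = 255
        out[:, k:k + line] = 255
    for k in range(sub // 2, n, sub):     # black grid, half-cell offset
        out[k:k + line, :] = 0
        out[:, k:k + line] = 0
    return out
```

The random fill gives the processing software unique texture in every cell, while the two offset grids supply the regular fringe lines that deviate over the body's contours.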
The second, white light (flash) image allows a real colour 3D model of the subject to be produced. If the white light image is taken second (immediately following the image taken with the pattern projected onto the user) then this will be captured almost instantaneously following the grid-projected image. The order of the images can be reversed (the flash image taken immediately preceding the image with the projection) and image capture will still be almost instantaneous.
As with the sensor data, a projected pattern covering the whole body is important in terms of producing a good 3D representation of the subject. To this end, several projectors may be included within the scanning booth. Wall mounting the projectors to face the subject is a convenient and space-saving way to fix their position within the booth (which, like the sensors, can be adjustable). However, projectors can sit on pedestals or similar objects and will project onto the subject from different angles to achieve as close as possible to a full body coverage. Again this will not include the soles of the feet or obstructed portions of the body. It has been found that using 5 to 15 projectors, and preferably 8 to 12 projectors, allows a good coverage to be achieved while still keeping costs and the overall size of the system to a minimum. Non-HD projectors can be utilised, and short throw projectors are ideal, although projectors with a narrower throw can also be used. A gobo projector, similar to a standard projector, would also work.
Although the example given above refers to a square grid for use in the projected image, any pattern can be used. If light and dark components are used then both must be visible; offsetting the two patterns can help ensure that they are easily distinguished by the processing software. Non-repeating patterns, or repeating patterns such as a matrix of dots, hexagons, rectangles, parallelograms, stripes or any other shape, can be used. The light and dark images need not be identical in size and shape, but using similar shapes helps to ensure that, once they have been offset, both are easily visible over the whole extent of the projected pattern. A number of images may be recorded wherein different patterns are projected onto the body. For example, if a grid pattern is being used, a wide grid might be projected onto the subject first and an image taken; Figure 7 shows an example of a subject with a fairly wide grid pattern projected onto them. A second, finer (narrower) grid might then be projected during capture of a second image. The wide-grid image can be used for processing parts of the image where the surface of the subject is smoother (i.e. where the appearance of the surface changes more slowly) and the fine grid used to process rougher or more complex parts. This helps to minimise processing power and maximise speed; however, the more images that are taken during a single scan, the longer the scan will take to complete. Alternatively, the projection pattern can be designed to use a finer grid or smaller pattern in areas which are expected to be more complex and a wider grid or larger pattern in areas which are expected to be smooth. Any pattern can be used for the projection, although regular patterns such as lines or grids are preferred.
Polarising filters can also be used to produce images in which reflection (for example shine from sweaty skin on the body of the user) is reduced. To achieve this, a polarising filter is placed over the light source and another over the camera lens. This works with just a single image; however, at least two images may be taken, one with and one without a polarising filter between the subject and the lens (or with filters orientated at different angles). When the filter in front of a lens/sensor is orientated perpendicular to the polarisation of the illuminating light, specularly reflected light, which retains the polarisation of the source, is blocked, reducing the reflections in the resulting image. Here the quicker LED flash bulbs are particularly advantageous, although any type of flash bulb can be used. The above sequences can be used in series or can be combined; for example, a polariser can be used while taking the illuminated image and not while the pattern is projected.
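One plausible way of combining the two exposures, offered purely as an assumption since the specification does not prescribe any particular combination, is a pixel-wise minimum: specular highlights remain bright in the unfiltered frame but are attenuated by cross-polarisation, so the minimum retains the diffuse detail while discarding the shine.

```python
import numpy as np

def suppress_specular(unfiltered, cross_polarised):
    """Combine two registered exposures of the same scene, one taken
    without a polarising filter and one through a cross-orientated
    filter. Specular reflections (shine, sweaty skin) survive in the
    unfiltered frame but are largely blocked by cross-polarisation,
    so a pixel-wise minimum keeps diffuse detail and drops the shine.
    """
    return np.minimum(np.asarray(unfiltered), np.asarray(cross_polarised))
```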
Either before or after the scan the user can be prompted to enter additional information such as age, gender, skin tone, height and so on to assist with providing an accurate image. Information, or a subset of information associated with a number of users can also, in some embodiments, be accessed by the health club or other external bodies in order to facilitate a statistical analysis of data for a number of members at once.
Once the necessary raw image data has been recorded by the cameras it is transferred to the processor within the module where it is part-processed to produce an initial image. This image will appear on the interactive screen and the user may be able to zoom or rotate it to an extent. Raw scan data (and/or the part-processed data) is then uploaded to the server to continue processing.
Here image processing refers to the process of producing a 3D model from raw data and/or calculating measurements and statistics from the 3D model. The calculation of statistics may require information or measurements which are not derived from the 3D image (such as weight measurements, temperature data, heartrate data, or information manually entered by the user).
Processing at the scanner may result in a backlog of data waiting to be sent to the server if a large number of users are operating the system in a short space of time. The processing of raw data in the scanning module can also lead to security issues because the scanning module will store the processed 3D models. Preferably, raw data is part-processed within the scanning module to produce an initial image for quick viewing (which may then be deleted). The module will contain a processor such as a computer which may be located in or near to the module. Data captured by the cameras will be transferred to the processor by wired or wireless connection so that realisation of the initial image can begin.
Once the initial image has been displayed to the user, the system communicates with a server remote from the scanning module for receiving the raw data (or part-processed data) so that processing can be completed remotely. The initial image may comprise any image arising from part-processing of the image data, for example a view of a part of the body. As an example, enough processing may be completed within the module to allow viewing of a 3D image of the user’s body from one angle (which may be chosen by the user). The rest of the processing may then comprise completing production of the model in order to allow a full 360 degree view and performing calculations on the model and additional information in order to provide statistical data along with the images. A user will have to use their app or log onto a website at a later time in order to view the completed processed images associated with their scan. The module can contain an interactive screen which can advise the user how long they may need to wait before attempting to access their processed images. This waiting time can be adapted depending on the backlog of data waiting to be uploaded and processed. It is also possible that some of the image processing can take place on the user device so that only raw data is stored elsewhere. This may benefit users in terms of lowering the risk associated with others accessing their scans.
Processing techniques used in the prior art generally require an expert to manipulate raw data in several stages in order to produce an acceptable 3D image that is substantially free of holes. The system of the present invention includes an automatic processing step which ensures that the only input required is a user request for a scan or a user request to view details of a scan. As mentioned, the automatic processing includes the production of the 3D digital model from the raw data and, in embodiments, can be complemented with the determination of measurements, statistics and cross sections from the image and any complementary data.
Processing can also comprise digitally altering the 3D model. This capability can be used as a motivational tool to show a user how they might be expected to appear after following a particular regime for a given amount of time, or after continuing with their current regime. A user’s pose can be digitally altered to allow a better comparison with earlier scans and images can be adjusted to account for differences in breathing across consecutive scans. Again, this processing step can be carried out automatically after realising the 3D model or on request by a user.
Control patterns can also be provided on the portion of the booth floor where a subject stands during the scan. This control pattern helps with calibration of the image in that it enables the processing element to automatically scale the 3D model and set the orientation of the image. The control pattern can be fixed onto the floor or can comprise a mat (for example a rubber mat with markings) for placing on the floor of the booth. The pattern can also be projected onto the floor, however a fixed pattern may be easier and cheaper because an additional projector is not required and the presence of the subject will not interfere with the projected image. An example of one possible control pattern is shown in Figure 8. The pattern can include grid lines 12 and spots 13 to help with positioning and sizing. Spots 13 are located at a particular distance from each other.
This distance is provided to the processor in order to calibrate the system. All other distances can be calculated relative to this initial distance and the 3D model sized accordingly. The distance between spots may be between 5 and 50 centimetres. Footprints 14 for positioning can (but do not need to) be included on top of the control pattern in order to encourage the subject to orientate themselves so that the pattern is not obscured. Both a light coloured grid 15 and a darker coloured grid 16 can be used as shown. Again this allows the grid to show up on any background.
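The scaling step described above amounts to simple arithmetic. The sketch below is illustrative only (the function names and the pixel-coordinate interface are assumptions, not taken from the specification): the known spot separation yields a scale factor that converts any pixel distance in the calibrated image to real-world units.

```python
def calibrate_scale(spot1_px, spot2_px, spot_distance_cm):
    """Derive a cm-per-pixel scale factor from two calibration spots
    (pixel coordinates) whose true separation is known, e.g. the
    5 to 50 cm spot spacing described for the control pattern."""
    dx = spot2_px[0] - spot1_px[0]
    dy = spot2_px[1] - spot1_px[1]
    pixel_distance = (dx * dx + dy * dy) ** 0.5
    return spot_distance_cm / pixel_distance

def scale_measurement(length_px, cm_per_px):
    """Convert any other pixel distance in the image to centimetres."""
    return length_px * cm_per_px
```

All other distances in the model are then sized relative to this one calibrated distance, as the description states.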
The grid may also include a colour chart to provide a colour reference to the processing software. In Figure 8 the colour chart is provided along two side panels 17. The colour of the side panels changes from black at one end 18 to white at the other end 19 (the change may be stepped, as shown, or continuous). Any number of panels such as this may be provided. Colour calibration is achieved using these charts: although the lighting within a booth may cause an image to appear slightly yellower than in reality (for example), the processing software knows that the colour at the end 19 of the colour panel is white and will adjust the other colours in the image accordingly. Although calibration of this type is possible with only one reference colour, a better result can be achieved with a range of shades or colours as shown in Figure 8. As an extension or addition to this colour reference, primary and secondary colours of the colour spectrum (red, green, blue and so on) can be introduced to further enhance the accurate reproduction of colours in the resulting colour 3D model.
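Colour correction against the known-white panel end can be sketched as a per-channel gain (sometimes called white-patch balancing). This is one plausible implementation offered as an assumption; the specification does not state which colour-correction algorithm is used.

```python
import numpy as np

def white_balance(image, measured_white_rgb):
    """Rescale each colour channel so that the measured colour of the
    known-white end of the reference panel (end 19 in Figure 8) maps
    back to pure white (255, 255, 255). `image` is an H x W x 3 array;
    the same per-channel gains are applied to every pixel."""
    gains = 255.0 / np.asarray(measured_white_rgb, dtype=float)
    return np.clip(np.asarray(image, dtype=float) * gains, 0.0, 255.0)
```

A fuller chart, with several shades or primary/secondary colour patches as described, would allow a fit over the whole tonal range rather than a single gain per channel.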
Once processing is complete the measurements and statistics are stored in a database on the server along with the finalised images and models. A single processed model (the data for a 3D image of one person or object) may take up around 50 to 200 MB.
Along with the raw scan data and/or part-processed images, the date and time of each scan, as well as any additional information input by the user, can be sent to the server for storage. All of the data specific to a user will be stored in an allocated storage area and is accessible to them once they have entered their login details and/or password. Other machinery or sensors can also be incorporated into the scanning module to record complementary data such as weight measurements, heart rate, blood pressure or temperature. For example, one or more infrared sensors can be mounted near to the cameras within the module to measure the temperature of the user's body. Several IR sensors, located at different positions within the module, may be required to provide a full 360° temperature image of the subject for superposition onto the 3D image; a full temperature image is, however, optional.
Measured changes over time can be displayed numerically, graphically (for example as shown in the heat maps in Figure 9 and the overlays in Figure 11) and visually via the mobile app, enabling individuals to see their progress. Equally, such data can be stored and accessed via an online information service, so that individuals can share progress with a personal trainer or share outcomes with friends on social media. Figure 9 shows a 3D model of a subject with heat maps overlaid, indicating which areas of the body have reduced or increased in size between scans. In Figure 9, areas where a change in measurement has occurred between the two scans are shown in red or blue (blue for a reduction in size and red for an increase). This provides a good way for the subject to visualise changes in their body shape and highlights areas in which an improvement is seen or which might need attention. Figure 10 shows some examples of statistical data derived from 3D models and complementary data collected in the module. This data is displayed in the form of line graphs and histograms, as might appear on the screen of a device when a person uses their app.
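The red/blue heat-map classification of Figure 9 might, purely by way of example, be computed per matched surface point as below; the use of radial measurements and the change threshold are assumptions for illustration only.

```python
import numpy as np

def change_heat_map(old_cm, new_cm, threshold_cm=0.2):
    """Classify per-point changes between two registered scans:
    'red' where the body has increased in size, 'blue' where it has
    reduced, 'none' where the change is within the threshold.
    Inputs are matched per-point measurements in cm (for example
    radial distances from the body axis) from the two scans."""
    delta = np.asarray(new_cm, dtype=float) - np.asarray(old_cm, dtype=float)
    return np.where(delta > threshold_cm, "red",
                    np.where(delta < -threshold_cm, "blue", "none"))
```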
Complementary data will be stored with the rest of the user's information in order to allow a fuller picture of changes in health and body structure to be built up. Given that the 3D models produced also allow for volume measurements, rather than simply indications of height and weight, a BVI (Body Volume Index) can be calculated using the volume and weight measurements. Data from the scanning module can also be complemented by data from other devices worn by the user either at the time of the scan or at a different time. A heart rate monitor, for example, might be linked wirelessly to the module so that measurements (along with the time at which each measurement is recorded) can be stored with the raw scan data. Alternatively the user can be prompted to enter such data if available, or to physically connect a device to the processor within the module in order to transfer data to it. Other information, such as the age and gender of a user or details of their lifestyle, can be entered and stored with the rest of the data. This can be done only once, but if lifestyle information is recorded over time it will be possible to correlate certain lifestyle changes with changes in the user's appearance derived from the scans.
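The volume measurement underlying the BVI calculation is a standard operation on a closed triangular mesh. The sketch below is a generic divergence-theorem implementation of that standard technique, not code from the patent, and it does not compute BVI itself (which also uses the weight measurement and is a proprietary measure).

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangular mesh, via the divergence
    theorem: the sum of signed tetrahedron volumes spanned by the
    origin and each face. `vertices` is a sequence of (x, y, z);
    `faces` lists vertex-index triples with consistent winding."""
    total = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # scalar triple product a . (b x c), divided by 6
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total)
```

This is one reason a hole-free model matters: the divergence-theorem sum is only a true volume when the mesh is closed.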
When a user wishes to access their data they will either select the app on their user device or load up the relevant web page, at which point they will be prompted to enter login details. They can also be given the option to provide personal details via the web or app, or to add payment details which, once given, will allow them access to their scans and related data. The option of connecting with friends over the internet can allow users to give each other access to their own data or to send data to friends which will allow comparisons to be made between scans for different users. The scan views can be interactive to allow the user to zoom in on parts of the scan or to rotate the image as desired. Certain settings can also allow a user to receive notifications once a certain goal has been met, such as reaching a particular waist measurement, weight or BVI.
Information can be displayed to the user in various ways. Overlapping images of the same part or the same cross section of a user’s body at different times may be displayed in order to accentuate changes. Changes can be highlighted or colour coded. An image might show, for example, a number of cross sections of a user’s waist extracted from scans taken each month over a period of a year. These may be displayed on top of one another with different scans shown in different colours. Rather than cross sectional views, front or side views might be displayed in the same manner. Figure 11 shows both a side and back view of a user with two scans taken at different times and displayed simultaneously. Dates may be shown below the scans and can be colour coded to match the corresponding image. This information can help a user to identify areas in which they need to improve and can aid them during discussions with trainers or during subsequent workouts.
Figure 12 shows an example of a screen that may be visible to a user on selecting to view their scan data. The 3D scan (which may be rotatable about all three axes) is shown on the left and is in the form of an avatar of the subject. Some representative data is shown on the right of the image. As mentioned, it is possible to adjust the avatar digitally to different positions or change the shape of the avatar after the image has been taken. In embodiments, clicking on the measurements on the right of the screen will cause markings to appear on the 3D image to represent the selected measurement. This allows the user to visualise the data more easily. The clay colour scheme is shown here as an example but 3D images may be shown in any colour, for example in real colour or in a colour chosen by the user.
Embodiments of the present invention have been described with particular reference to the examples illustrated. However, it will be appreciated that variations and modifications may be made to the examples described within the scope of the present invention.

Claims (41)

Claims
1. A system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image; the system being arranged to communicate with a server remote from the scanning module, for receiving the raw data and/or the part-processed data to complete processing and thereby generate the 3D digital model.
2. The system of claim 1, wherein each point on the surface of the scanning volume is within the field of view of three sensors.
3. The system of any of claims 1 and 2, further comprising an app downloadable onto a user device and/or cloud based platform to allow a user access to their processed images and digital 3D model from the server.
4. The system of any of claims 1 to 3, wherein processing the images further comprises calculating measurements and statistics from the 3D digital model.
5. The system of any of claims 1 to 4, wherein processing the images further comprises analysing images to assess skin conditions.
6. The system of any of claims 1 to 5, wherein the sensors comprise optical digital cameras.
7. The system of any of claims 1 to 6, wherein the system is arranged to operate in response to an initial user input, without further user interaction.
8. The system of any of claims 1 to 7, wherein the raw data is captured by all sensors at substantially the same time.
9. The system of any of claims 1 to 8, wherein the scanning module further comprises a system for illuminating the subject during image capture.
10. The system of claim 9, wherein at least two images are taken for each scan, wherein during one image capture event the subject is illuminated and during another image capture event a pattern is projected onto the subject.
11. The system of any of claims 9 or 10, wherein at least two images are taken for each scan, one of which is taken through a polarizing filter and the other through a filter having a different polarization or without a polarizing filter.
12. The system of any of claims 9 to 11, wherein the system for illuminating the subject comprises one or more LED lightbulbs.
13. The system of any of claims 1 to 12, further comprising a positioning system for ensuring that the subject is within the scanning volume and in a desired pose.
14. The system of claim 13, wherein the positioning system comprises one or more handles for the subject to hold onto during the scan.
15. The system of any of claims 13 and 14, wherein the positioning system comprises a live video of the subject shown on one or more screens overlaid with an outline showing the desired pose.
16. The system of any of claims 1 to 15, wherein the system further comprises a device for measuring and recording information relating to the weight and/or biometric data of the subject.
17. The system of claim 16, wherein the information relating to the weight and/or biometric data of the subject is stored on the server.
18. The system of any of claims 1 to 17, wherein the system further comprises means to measure and visualise changes to the 3D data using heat maps and comparison images of some or all of the surface of the subject.
19. The system of claim 18, wherein the heat maps and comparison images of the subject’s body are stored on the server.
20. The system of any of claims 1 to 19, wherein the digital 3D models are digitally altered during processing to correct posture and/or to account for breathing.
21. The system of any of claims 1 to 20, comprising a server remote from the scanning module for receiving the raw data and/or the part-processed data to complete processing and thereby generate the 3D digital model.
22. A system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a projector for projecting a pattern onto the subject during data capture, the pattern comprising a light coloured image overlaid with a dark coloured image; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
23. The system of claim 22, wherein the light coloured image and the dark coloured image each comprise a plurality of smaller patterns that are repeated.
24. The system of claim 23, wherein the repeated patterns comprise grids.
25. The system of claim 24, wherein the grids are square grids.
26. The system of any of claims 24 and 25, wherein the light and dark coloured grids are of the same size.
27. The system of any of claims 22 to 26, wherein the light and dark coloured images are offset from each other.
28. The system of claim 27, wherein the light and dark coloured images are offset from each other in two orthogonal directions.
29. A system for producing a 3D digital model of a subject, the system comprising: a scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject located within a scanning volume, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the respective sensor; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
30. The system of claim 29, wherein the distance from at least half of the sensors to the surface of the scanning volume is less than the focus distance of the respective sensors.
31. The system of claim 30, wherein the sensors are positioned such that the distance from each of the sensors to the surface of the scanning volume is less than the focus distance of the respective sensor.
32. A method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; part-processing the raw data in the scanning module in real time to produce an initial image, wherein the processing comprises producing a 3D digital model from the raw data; communicating the raw data and/or part-processed data to a server remote from the scanning module for completion of processing of the data to generate the 3D digital model.
33. A method according to claim 32, comprising at the remote server, processing the raw data and/or part processed data to generate the 3D digital model.
34. A method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; projecting a pattern onto the subject during data capture, the pattern comprising a light coloured image overlaid with a dark coloured image; processing the raw data to generate a 3D digital model from the raw data.
35. A method for producing a 3D digital model, the method comprising: locating a subject within a scanning volume within a scanning module, the scanning module having a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the sensor; processing the raw data to generate a 3D digital model from the raw data.
36. A scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image and being arranged to upload the initial image to a server for completion of processing to thereby generate the 3D digital model.
37. A scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors; a projector for projecting a pattern onto the subject during data capture, the pattern comprising a light coloured image overlaid with a dark coloured image; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
38. A scanning module for producing a 3D digital model of a subject in a scanning volume, the scanning module comprising: the scanning volume; a plurality of static sensors for capturing raw data representing unprocessed images of the subject, wherein each point on the surface of the scanning volume is within the field of view of at least two sensors, and wherein the distance from at least one sensor to the surface of the scanning volume is less than the focus distance of the sensor; a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data.
39. A system for producing a 3D digital model, the system being substantially as shown in and/or described with reference to any one or more of Figures 1 to 12 of the accompanying drawings.
40. A method for producing a 3D digital model, the method being substantially as shown in and/or described with reference to any one or more of Figures 1 to 12 of the accompanying drawings.
41. A scanning module for producing a 3D digital model, the scanning module being substantially as shown in and/or described with reference to any one or more of Figures 1 to 12 of the accompanying drawings.
GB1519463.2A 2015-11-04 2015-11-04 A system, method and scanning module for producing a 3D digital model of a subject Withdrawn GB2544268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1519463.2A GB2544268A (en) 2015-11-04 2015-11-04 A system, method and scanning module for producing a 3D digital model of a subject

Publications (2)

Publication Number Publication Date
GB201519463D0 GB201519463D0 (en) 2015-12-16
GB2544268A true GB2544268A (en) 2017-05-17

Family

ID=55130650


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020216820A1 (en) * 2019-04-26 2020-10-29 FotoFinder Systems GmbH Apparatus for producing a whole-body image
US20220114734A1 (en) * 2020-10-14 2022-04-14 Shutterfly, Llc System for background and floor replacement in full-length subject images

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
DE102019105015A1 (en) * 2019-02-27 2020-08-27 Peri Gmbh Construction of formwork and scaffolding using mobile devices
KR102323328B1 (en) * 2019-09-17 2021-11-09 주식회사 날마다자라는아이 System for measuring growth state of child using smart scale

Citations (6)

Publication number Priority date Publication date Assignee Title
GB1583371A (en) * 1976-03-17 1981-01-28 Wurth Anciens Ets Paul Method and apparatus for determining the three-dimensional surface profile of the charge of a furnace
US20050058336A1 (en) * 2003-09-16 2005-03-17 Russell Gerald S. Measuring the location of objects arranged on a surface, using multi-camera photogrammetry
WO2005073801A1 (en) * 2004-01-31 2005-08-11 Openvr Co., Ltd. 3 dimensional image generator with fixed camera
US20120307021A1 (en) * 2011-05-30 2012-12-06 Tsai Ming-June Dual-mode optical measurement apparatus and system
EP2561810A1 (en) * 2011-08-24 2013-02-27 Université Libre de Bruxelles Method of locating eeg and meg sensors on a head
US20140341484A1 (en) * 2013-05-20 2014-11-20 Steven Sebring Systems and methods for producing visual representations of objects



Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20190523 AND 20190529

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)