CN103105927A - Information processing apparatus, display control method, and program - Google Patents

Information processing apparatus, display control method, and program

Info

Publication number
CN103105927A
CN103105927A, CN2012104320512A, CN201210432051A
Authority
CN
China
Prior art keywords
subject
action
clothes
changes
detects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012104320512A
Other languages
Chinese (zh)
Inventor
铃木诚司
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN103105927A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/16: Cloth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

There is provided an information processing apparatus including: an operation detecting unit that detects an operation of a subject that has been captured; and a display control unit that changes a worn state of at least one of virtual clothing or accessories displayed overlaid on the subject in accordance with the operation detected by the operation detecting unit.

Description

Information processing apparatus, display control method, and program
Technical field
The present disclosure relates to an information processing apparatus, a display control method, and a program.
Background art
Various technologies have been proposed for virtual try-on systems that generate a dressing image (an image of a user trying on clothes or the like) by superimposing a clothing image onto a captured image of the user.
For example, Japanese Laid-Open Patent Publication No. 2006-304331 discloses a method of superimposing a clothing image onto an image of a user's body. More specifically, the image processing server disclosed in JP 2006-304331 changes the size of the clothing image according to information such as body shape data (height, shoulder width, and the like) attached to the body image and the orientation of the image, adjusts the orientation of the clothing image, and then superimposes the clothing image onto the body image.
Summary of the invention
With the dressing image generation technology disclosed in JP 2006-304331, the clothing image to be superimposed is adjusted merely by changing its size and orientation, and the states of the collar and sleeves of the clothing image are predetermined, which means it is difficult to partially change the clothing image. However, when actually trying on clothes, users also want to try out different states of the collar or sleeves (hereinafter referred to collectively as the "worn state") according to their preferences.
For this reason, the present disclosure aims to provide a novel and improved information processing apparatus, display control method, and program capable of changing the worn state in accordance with an action performed by the subject.
According to the present disclosure, there is provided an information processing apparatus including: a motion detection unit detecting an action of a subject that has been captured; and a display control unit changing, in accordance with the action detected by the motion detection unit, the worn state of virtual clothes and/or accessories displayed overlaid on the subject.
According to the present disclosure, there is provided a display control method including: detecting an action of a subject that has been captured; and changing, in response to the detected action, the worn state of virtual clothes and/or accessories displayed overlaid on the subject.
According to the present disclosure, there is provided a program causing a computer to execute: a process of detecting an action of a subject that has been captured; and a process of changing, in accordance with the detected action, the worn state of virtual clothes and/or accessories displayed overlaid on the subject.
According to the embodiments of the present disclosure described above, the worn state can be changed according to an action of the subject.
Brief description of the drawings
Fig. 1 is a diagram for explaining an overview of an AR dressing system according to an embodiment of the present disclosure;
Fig. 2 is a block diagram showing the configuration of an information processing apparatus according to an embodiment of the present disclosure;
Fig. 3 is a diagram for explaining the positional relationship between a camera and a subject in real space, and a captured image produced by capturing the subject;
Fig. 4 is a diagram for explaining bone information according to an embodiment of the present disclosure;
Fig. 5 is a diagram for explaining the positional relationship between a virtual camera and virtual clothes in virtual space, and a virtual clothing image produced by projecting the virtual clothes;
Fig. 6 is a flowchart showing the basic display control process for displaying an AR dressing image according to an embodiment of the present disclosure;
Fig. 7 is a flowchart showing the process for controlling the worn state according to gestures, according to an embodiment of the present disclosure;
Fig. 8 is a diagram for explaining control example 1, in which the worn state is controlled according to a valid gesture, according to an embodiment of the present disclosure;
Fig. 9 is a diagram for explaining control example 3, in which the worn state is controlled according to a valid gesture, according to an embodiment of the present disclosure;
Fig. 10 is a diagram for explaining a case in the AR dressing system according to an embodiment of the present disclosure where the camera position has been changed to behind the subject;
Fig. 11 is a diagram for explaining a case in the AR dressing system according to an embodiment of the present disclosure where virtual clothes are displayed overlaid on a subject whose three-dimensional shape has been reconstructed.
Embodiment
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that in this specification and the drawings, structural elements that have substantially the same function are denoted by the same reference numerals, and repeated description of such structural elements is omitted.
The description is given in the order shown below.
1. Overview of an AR dressing system according to an embodiment of the present disclosure
2. Configuration of the information processing apparatus
3. Display control
3-1. Basic display control
3-2. Control of the worn state according to gestures
3-3. Display from an objective viewpoint
4. Conclusion
1. Overview of an AR dressing system according to an embodiment of the present disclosure
In recent years, a technology called augmented reality (AR), which presents additional information to the user by overlaying it on the real world, has been attracting attention. The information presented to the user by AR technology is visualized using virtual objects of various forms, such as text, icons, and animations. One of the main applications of AR technology is supporting user activities in the real world. In the following description, AR technology is applied to a try-on system (that is, a system for trying on clothes and the like).
By displaying a virtual clothing image overlaid on the body in a manner consistent with the user's actions, a try-on system that uses AR technology lets the user try on clothes virtually in real time. In addition, the AR dressing system according to an embodiment of the present disclosure can change the worn state of the virtual clothes according to the user's actions, thereby providing an interactive virtual fitting room. In this way, the user can try on clothes with a greater degree of freedom and can check how well the clothes fit when virtually trying them on.
An overview of the AR dressing system according to an embodiment of the present disclosure will now be described with reference to Fig. 1. As shown in Fig. 1, the AR dressing system 1 according to an embodiment of the present disclosure includes an information processing apparatus 10, a camera 15, a sensor 17, and a display device 19. Note that there is no particular limitation on where the AR dressing system 1 is set up. For example, the AR dressing system 1 may be set up in the user's home or in a retail store.
Also, although in the example shown in Fig. 1 the plurality of devices constituting the AR dressing system 1 (that is, the information processing apparatus 10, the camera 15, the sensor 17, and the display device 19) are configured as discrete devices, the configuration of the AR dressing system 1 according to the present embodiment is not limited to this. For example, any combination of the devices constituting the AR dressing system 1 may be integrated into a single device. As another example, the devices constituting the AR dressing system 1 may be built into a smartphone, a PDA (personal digital assistant), a mobile phone, a mobile audio player, a mobile image processing device, or a mobile game console.
The camera (image pickup device) 15 captures images of objects present in real space. Although there is no particular limitation on the objects present in real space, such objects may be, as examples, living things such as people or animals, or inanimate things such as a garage or a TV cabinet. In the example shown in Fig. 1, a subject A (for example, a person) is captured by the camera 15 as an object present in real space. Images captured by the camera 15 (hereinafter also referred to as "captured images") are displayed on the display device 19. The captured images displayed on the display device 19 may be RGB images. The camera 15 also sends the captured images to the information processing apparatus 10.
The sensor 17 has a function of detecting parameters from the real space and sends the detected data to the information processing apparatus 10. For example, if the sensor 17 is an infrared sensor, the sensor 17 can detect infrared waves from the real space and supply an electrical signal consistent with the detected amount of infrared radiation to the information processing apparatus 10 as the detected data. The information processing apparatus 10 can then, for example, recognize objects present in the real space based on the detected data. The type of the sensor 17 is not limited to an infrared sensor. Note that although the detected data is supplied from the sensor 17 to the information processing apparatus 10 in the example shown in Fig. 1, the detected data supplied to the information processing apparatus 10 may instead be images captured by the camera 15.
The information processing apparatus 10 can process the captured images, for example by superimposing virtual objects on them and/or reshaping them in keeping with the result of recognizing the objects present in the real space. The display device 19 can also display the images processed by the information processing apparatus 10.
For example, as shown in Fig. 1, the information processing apparatus 10 can recognize the subject A in real space and display, in real time on the display device 19, a dressing image on which a clothing image has been superimposed. In this example, the user's body is the video of the real space, and the clothing image to be tried on is the virtual object displayed superimposed on that video. In this way, the AR dressing system 1 provides a virtual fitting room in real time.
The information processing apparatus 10 according to an embodiment of the present disclosure has a function of detecting actions of the subject A. In this way, the information processing apparatus 10 can change the clothing image to be superimposed on the captured image according to the detected action, thereby changing the worn state. By displaying in real time on the display device 19 an AR dressing image whose worn state changes according to actions of the subject A, an interactive virtual fitting room can be provided.
2. Configuration of the information processing apparatus
Next, the configuration of the information processing apparatus 10 that implements the AR dressing system according to the present embodiment of the present disclosure will be described with reference to Fig. 2. As shown in Fig. 2, the information processing apparatus 10 includes a control unit 100, an operation input unit 120, and a storage unit 130. The control unit 100 includes a bone position calculation unit 101, a motion detection unit 103, and a display control unit 105. The information processing apparatus 10 is also connected, wirelessly or by cable, to the camera 15, the sensor 17, and the display device 19.
The control unit 100 corresponds to a processor such as a CPU (central processing unit) or a DSP (digital signal processor). By executing a program stored in the storage unit 130 or another storage medium, the control unit 100 implements the various functions of the control unit 100 described later. Note that the functional blocks constituting the control unit 100 may all be implemented in the same device, or some of them may be implemented in another device (such as a server).
The storage unit 130 uses a storage medium such as a semiconductor memory or a hard disk to store programs and data for the processing performed by the information processing apparatus 10. For example, the storage unit 130 stores the program that causes a computer to function as the control unit 100. The storage unit 130 may also store data to be used by the control unit 100. The storage unit 130 according to the present embodiment stores three-dimensional data for clothing and/or accessories, together with material information and size information associated with the clothing and/or accessories, as the virtual objects to be displayed. Note that in this specification the expression "clothing and/or accessories" can include both clothes and accessories, where the term "accessories" covers glasses, hats, belts, and the like.
The operation input unit 120 includes an input device and an input control circuit. The input device lets the user input information and may be a mouse, a keyboard, a touch panel, one or more buttons, a microphone, one or more switches, one or more levers, a remote controller, or the like. The input control circuit generates an input signal based on the user's input and outputs it to the control unit 100. By operating the operation input unit 120, the user can switch the information processing apparatus 10 on and off and issue instructions such as starting the AR dressing system program.
The camera 15 (image pickup device) generates captured images by capturing the real space using an image pickup element such as a CCD (charge-coupled device) or a CMOS (complementary metal oxide semiconductor) sensor. Although the camera 15 is assumed to be separate from the information processing apparatus 10 in the present embodiment of the present disclosure, the camera 15 may also be part of the information processing apparatus 10.
The camera 15 also supplies the configuration information it uses during image capture to the control unit 100. Fig. 3 is a diagram for explaining the positional relationship between the camera 15 and the subject A in real space, and the captured image A' generated by capturing the subject A. For convenience of illustration, Fig. 3 shows, on the same side as the subject, the captured image A' (two-dimensional, with xy coordinates) of the subject A (three-dimensional, with xyz coordinates) that is formed on the image pickup element, placed at the focal distance f_real from the principal point, which is the optical center of the lens (not shown) of the camera 15, to the image pickup element (likewise not shown). As described later, the distance d_real from the camera 15 to the subject A is calculated as depth information. The angle of view θ_real of the camera 15 is determined mainly according to the focal distance f_real. As examples of its configuration information, the camera 15 supplies the focal distance f_real (or the angle of view θ_real) and the resolution (that is, the number of pixels) of the captured image A' to the information processing apparatus 10.
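A minimal sketch of this pinhole relationship between focal distance and angle of view, with the focal length and sensor width chosen purely for illustration:

```python
import math

def angle_of_view(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a pinhole camera: theta = 2 * atan(w / 2f)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Hypothetical values: a 4.8 mm-wide image pickup element behind a 3.6 mm lens.
theta_real = angle_of_view(focal_length_mm=3.6, sensor_width_mm=4.8)
print(f"theta_real = {theta_real:.1f} degrees")  # about 67.4 degrees
```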
The sensor 17 has a function of detecting parameters from the real space. For example, if the sensor 17 is an infrared sensor, the sensor 17 can detect infrared radiation from the real space and supply an electrical signal consistent with the detected amount of infrared radiation to the information processing apparatus 10 as the detected data. The type of the sensor 17 is not limited to an infrared sensor. Note that if images captured by the camera 15 are used as the detected data, the sensor 17 need not be provided.
The display device 19 is a display module made up of an LCD (liquid crystal display), an OLED (organic light-emitting diode) display, a CRT (cathode-ray tube), or the like. Although the display device 19 is assumed to be separate from the information processing apparatus 10 in the present embodiment of the present disclosure, the display device 19 may also be part of the information processing apparatus 10.
Next, the functional configuration of the control unit 100 mentioned above will be described. As mentioned earlier, the control unit 100 includes the bone position calculation unit 101, the motion detection unit 103, and the display control unit 105.
Bone position calculation unit 101
The bone position calculation unit 101 calculates the bone positions of the body appearing in the captured image based on the detected data. There is no particular limitation on the method of calculating the bone positions, in real space, of the object appearing in the captured image. For example, the bone position calculation unit 101 first recognizes the region in the captured image where the object is present (also referred to as the "object existence region") and acquires depth information for the object in the captured image. The bone position calculation unit 101 can then recognize the parts (head, left shoulder, right shoulder, torso, and so on) of the object appearing in the captured image, in real space, based on the depth and shape (feature quantities) of the object existence region, and calculate the center of each part as its bone position. Here, the bone position calculation unit 101 can use a feature quantity dictionary stored in the storage unit 130 to check feature quantities determined from the captured image against feature quantities registered in advance in the dictionary for each part of an object, thereby recognizing the parts of the object included in the captured image.
Various methods are conceivable for recognizing the object existence region. For example, if captured images are supplied as the detected data, the bone position calculation unit 101 can recognize the object existence region based on the difference between a captured image from before the object appeared and a captured image in which the object appears. More specifically, the bone position calculation unit 101 can recognize, as the object existence region, the region where the difference between the two captured images exceeds a threshold.
As another example, if parameters detected by the sensor 17 are supplied as the detected data, the bone position calculation unit 101 can recognize the object existence region based on those detected data. More specifically, the bone position calculation unit 101 can recognize, as the object existence region, the region where the detected amount of infrared radiation exceeds a threshold.
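Both variants come down to thresholding a per-pixel difference. A sketch of the captured-image variant, with the threshold value an assumption for illustration:

```python
import numpy as np

def object_existence_region(background: np.ndarray, frame: np.ndarray,
                            threshold: int = 30) -> np.ndarray:
    """Boolean mask of pixels whose difference from the empty-scene image
    exceeds the threshold -- the 'object existence region'."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold  # any colour channel over the threshold

# Hypothetical 480x640 RGB frames: an empty scene, then one with a subject.
background = np.zeros((480, 640, 3), dtype=np.uint8)
frame = background.copy()
frame[100:400, 200:440] = 128  # stand-in for the subject's pixels
mask = object_existence_region(background, frame)
print(mask.sum(), "pixels in the object existence region")
```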
Various methods are also conceivable for acquiring the depth information of the object in the captured image. For example, the distance between the camera 15 and the object may be determined in advance. That is, a restriction may be set so that the object is positioned at a predetermined distance from the camera 15. With such a restriction in place, the bone position calculation unit 101 can treat the depth information of the object (here, the distance between the camera 15 and the object) as a fixed value (for example, 2 m).
The bone position calculation unit 101 can also calculate the depth information of the object in the captured image based on the parameters detected by the sensor 17. More specifically, if light such as infrared light is emitted toward the object from an irradiation device (not shown), the bone position calculation unit 101 can calculate the depth information of the object in the captured image by analyzing the light detected by the sensor 17.
As another example, the bone position calculation unit 101 can calculate the depth information of the object in the captured image based on the phase delay of the light detected by the sensor 17. This method is sometimes called TOF (time of flight). Alternatively, if the light emitted from the irradiation device (not shown) forms a known pattern, the bone position calculation unit 101 can calculate the depth information of the object in the captured image by analyzing the degree of distortion of that pattern in the light detected by the sensor 17.
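For the TOF variant, depth follows from the phase delay of amplitude-modulated light over the round trip; a sketch, with the modulation frequency an assumed example value:

```python
import math

LIGHT_SPEED = 299_792_458.0  # m/s

def tof_depth(phase_delay_rad: float, modulation_hz: float) -> float:
    """Depth from phase delay: the light travels out and back,
    so d = c * phi / (4 * pi * f_mod)."""
    return LIGHT_SPEED * phase_delay_rad / (4.0 * math.pi * modulation_hz)

# A phase delay of about 1.676 rad at 20 MHz modulation is roughly 2 m.
print(f"{tof_depth(1.676, 20e6):.2f} m")
```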
Note that an image pickup device with this function of calculating the depth information of an object in a captured image is called a depth camera and can be realized by a stereo camera or a laser range scanner. The bone position calculation unit 101 can acquire depth information from a depth camera connected to the information processing apparatus 10.
Based on the depth and shape (feature quantities) of the object existence region acquired by the methods above, the bone position calculation unit 101 recognizes the parts (head, shoulders, and so on), in real space, of the object appearing in the captured image and calculates the bone position of each part. The bone information calculated by the bone position calculation unit 101, which includes the bone position of at least one part constituting the subject A, is described below with reference to Fig. 4.
Fig. 4 is a diagram for explaining the bone information. Although the coordinates B1 to B3, B6, B7, B9, B12, B13, B15, B17, B18, B20 to B22, and B24, which represent the positions of fifteen parts constituting the subject A, are given as an example of the bone information in Fig. 4, there is no particular limitation on the number of parts included in the bone information.
Note that the coordinate B1 represents the coordinates of the "head", the coordinate B2 those of the "neck", the coordinate B3 those of the "torso", the coordinate B6 those of the "right shoulder", and the coordinate B7 those of the "right elbow". Further, the coordinate B9 represents the coordinates of the "right hand", the coordinate B12 those of the "left shoulder", the coordinate B13 those of the "left elbow", and the coordinate B15 those of the "left hand".
The coordinate B17 represents the coordinates of the "right hip", the coordinate B18 those of the "right knee", the coordinate B20 those of the "right foot", and the coordinate B21 those of the "left hip". The coordinate B22 represents the coordinates of the "left knee", and the coordinate B24 those of the "left foot".
As described earlier, the bone position calculation unit 101 according to the present embodiment acquires the depth information of the object in the captured image; as a specific example, it can acquire the depth information from the depth camera mentioned above as a captured image (not shown) whose shading varies according to depth.
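The bone information can be pictured as a map from the fifteen parts above to xyz coordinates; a sketch, with the English part names an assumed rendering of the coordinates B1 to B24:

```python
from dataclasses import dataclass

@dataclass
class BonePosition:
    x: float  # metres, camera coordinate system
    y: float
    z: float  # depth obtained as described above

# Assumed keys for the fifteen coordinates B1..B24 listed above.
PART_NAMES = [
    "head", "neck", "torso",
    "right_shoulder", "right_elbow", "right_hand",
    "left_shoulder", "left_elbow", "left_hand",
    "right_hip", "right_knee", "right_foot",
    "left_hip", "left_knee", "left_foot",
]

def make_bone_info(coords: dict) -> dict:
    """Build bone information from raw (x, y, z) tuples per part."""
    return {name: BonePosition(*coords[name]) for name in PART_NAMES if name in coords}

bones = make_bone_info({"head": (0.0, 1.6, 2.0), "neck": (0.0, 1.45, 2.0)})
print(bones["head"])
```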
Motion detection unit 103
The motion detection unit 103 detects actions based on the changes over time in the bone positions calculated by the bone position calculation unit 101 and, when a valid gesture has been made, outputs the detected gesture to the display control unit 105. The motion detection unit 103 compares the detected action with the gestures registered in a gesture DB (database) stored in advance in the storage unit 130 to judge whether the detected action is a valid gesture. For example, an action in which the subject A moves his or her hand outward from a position where virtual clothes are displayed overlaid on the subject A is registered in the gesture DB as a valid gesture of grasping and pulling the clothes. As another example, an action in which the subject moves one hand from the wrist of the other arm toward the elbow is registered in the gesture DB as a valid gesture of rolling up a sleeve.
Note that the detection of actions of the subject (for example, a person) in real space can be realized by the bone-information-based motion detection described above, or by another technology commonly known as motion capture. For example, the motion detection unit 103 may detect actions of the subject based on parameters detected by acceleration sensors or the like attached to the subject's joints. The motion detection unit 103 may also detect actions by detecting the movement of markers attached to the subject.
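However the raw motion is obtained, judging a valid gesture reduces to matching a time series of bone coordinates against a registered pattern. A much-simplified stand-in for the gesture DB lookup, with the travel threshold and edge position chosen for illustration only:

```python
def is_pull_gesture(hand_xs: list, garment_edge_x: float,
                    min_travel: float = 0.15) -> bool:
    """Judge a 'grasp and pull' gesture: the hand starts inside the region
    where the virtual clothes are overlaid and moves outward past their
    edge by at least min_travel metres."""
    if len(hand_xs) < 2:
        return False
    started_inside = hand_xs[0] <= garment_edge_x
    moved_outside = hand_xs[-1] > garment_edge_x
    return started_inside and moved_outside and hand_xs[-1] - hand_xs[0] >= min_travel

# Hypothetical left-hand (B15) x coordinates over successive frames, in metres.
print(is_pull_gesture([0.30, 0.38, 0.47, 0.55], garment_edge_x=0.42))  # True
```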
Display control unit 105
The display control unit 105 performs control to generate an AR dressing image in which virtual clothes are displayed overlaid on the subject appearing in the captured image, and to display the AR dressing image on the display device 19. The display control unit 105 according to the present embodiment can change the worn state in the AR dressing image according to the action (that is, the valid gesture) detected by the motion detection unit 103. More specifically, the display control unit 105 can provide an interactive fitting room in which part or all of the worn state of the virtual clothes changes according to the subject's gesture (that is, the change over time in coordinates) and the position (coordinates) of that gesture.
Here, the generation of the virtual clothes to be overlaid on the captured image will be described with reference to Fig. 5. Fig. 5 is a diagram for explaining the positional relationship between the virtual camera 25 and the virtual clothes C in virtual space, and the virtual clothing image C' (also referred to as the "virtual image") generated by projecting (rendering) the virtual clothes C. In Fig. 5, in the same way as for the captured image A' produced by capturing the real space in Fig. 3, the rendered virtual clothing image C' is shown on the same side as the virtual clothes.
The settings (internal parameters) of the virtual camera 25 are determined according to the settings (internal parameters) of the camera 15 that captures the real space. The expression "camera settings (internal parameters)" may refer, for example, to the focal distance f, the angle of view θ, and the number of pixels. The display control unit 105 configures the virtual camera 25 so that its settings match the camera 15 in real space (this process is also referred to as "initialization").
Next, based on the depth information of the object in the captured image, the display control unit 105 places the virtual clothes C, according to the bone positions of the subject, at a distance d_virtual from the virtual camera 25 that equals the distance d_real from the camera 15 to the subject A in real space. The display control unit 105 can generate the virtual clothes C from three-dimensional data modeled in advance. As shown in Fig. 5, by constructing the surface of the virtual clothes C from a set of triangular polygons, for example, the display control unit 105 can represent the three-dimensional shape of the virtual clothes more realistically. If the bone positions of the subject A change over time, the display control unit 105 can change the position of the virtual clothes C so as to track the bone positions.
The display control unit 105 then renders the three-dimensional clothes C with the virtual camera 25, that is, projects them to generate a two-dimensional image, and thereby obtains the clothing image C' (the "virtual image"). The display control unit 105 can then generate the AR dressing image by displaying the virtual clothing image C' overlaid on the captured image A' (see Fig. 3). Note that the display control performed by the display control unit 105 on AR dressing images is described in more detail next, in "3. Display control".
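The projection step is an ordinary pinhole projection with the virtual camera's intrinsics matched to the real ones at initialization; a sketch, with the intrinsic values assumed for illustration:

```python
import numpy as np

def project(vertices: np.ndarray, focal_px: float,
            cx: float, cy: float) -> np.ndarray:
    """Project Nx3 garment vertices (camera coordinates, z = depth in metres)
    onto the image plane, so the rendered image C' lines up with the
    captured image A'."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return np.stack([focal_px * x / z + cx, focal_px * y / z + cy], axis=1)

# One hypothetical triangle of the garment mesh at d_virtual = d_real = 2 m,
# with intrinsics (focal length in pixels, principal point) copied from camera 15.
triangle = np.array([[-0.2, -0.3, 2.0], [0.2, -0.3, 2.0], [0.0, 0.4, 2.0]])
print(project(triangle, focal_px=525.0, cx=320.0, cy=240.0))
```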
This completes the detailed description of the configuration of the information processing apparatus 10 that implements the AR dressing system according to the present embodiment of the present disclosure. Next, the display control performed by the information processing apparatus 10 on AR dressing images will be described.
3. Display control
3-1. Basic display control
Fig. 6 is a flowchart showing the basic display control process for AR dressing images performed by the information processing apparatus 10. As shown in Fig. 6, first, in step S110, the display control unit 105 performs initialization so that the settings of the virtual camera 25 in the virtual space match the settings of the camera 15 in real space.
Next, in step S113, the bone position calculation unit 101 calculates the bone positions (xyz coordinates) of the captured subject A in real space and outputs them to the motion detection unit 103 and the display control unit 105.
After this, in step S116, the display control unit 105 places the virtual clothes C in the virtual space according to the bone positions (xyz coordinates) of the subject A.
Then, in step S119, the display control unit 105 performs control (AR display control) to render the virtual clothes C to obtain the clothing image C' (virtual image), draw the AR dressing image by superimposing the clothing image C' on the captured image A', and display the result on the display device 19.
In step S122, the information processing apparatus 10 repeats steps S113 to S119 until a stop instruction is received. In this way, the information processing apparatus 10 can provide, in real time, an AR dressing image that tracks the movement of the subject A.
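Put together, steps S110 to S122 form a simple render loop. A sketch with the four steps injected as callables (all interface names are assumptions, not part of the disclosure):

```python
def ar_dressing_loop(initialize, calc_bones, place_clothes, render, frames):
    """S110: initialize once; then, per captured frame, S113 calculate bone
    positions, S116 place the virtual clothes C, and S119 render C' over
    the captured image. Iteration over `frames` ends at the stop
    instruction (S122)."""
    initialize()
    for frame in frames:
        bones = calc_bones(frame)
        clothes = place_clothes(bones)
        render(frame, clothes)

# Trivial stand-ins, just to show the call order.
ar_dressing_loop(
    initialize=lambda: print("S110: match virtual camera to camera 15"),
    calc_bones=lambda f: {"torso": (0.0, 1.0, 2.0)},
    place_clothes=lambda b: {"at": b["torso"]},
    render=lambda f, c: print(f"S113-S119: frame {f}, clothes {c}"),
    frames=range(2),
)
```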
This completes the description of the basic display control process. In addition, the information processing apparatus 10 according to the present embodiment can change the worn state of the virtual clothes according to actions of the subject A. The control of the worn state according to gestures in the present embodiment will now be described in detail with reference to Fig. 7.
3-2. Control of the worn state according to gestures
Fig. 7 is a flowchart showing the process, performed by the information processing apparatus 10 according to the present embodiment, of controlling the worn state according to gestures. The process shown in Fig. 7 depicts the control of the worn state carried out within the display control of steps S116 and S119 shown in Fig. 6.
First, in step S116 in Fig. 7, in the same way as in the same step shown in Fig. 6, the virtual clothes C are placed in the virtual space in keeping with the bone positions of the subject A. Then, in step S119, in the same way as in the same step shown in Fig. 6, the clothing image C' obtained by rendering the virtual clothes C is displayed overlaid on the captured image A', and the basic AR dressing image is displayed on the display device 19.
Next, in step S125, the motion detection unit 103 detects a gesture (action) based on the change over time in the bone positions (coordinates) of the hands.
After this, in step S128, the motion detection unit 103 judges whether the detected gesture is a valid gesture.
In step S131, the display control unit 105 then controls the worn state according to the gesture detected by the motion detection unit 103 as a valid gesture. Such control of the worn state may change part or all of the virtual clothes C in three dimensions (in the virtual space), or may change part or all of the virtual clothing image C' in the two-dimensional image (virtual image) obtained by rendering.
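The branch from valid gesture to worn-state change can be pictured as a dispatch table. A sketch in which the gesture names and the flat state dictionary are assumptions for illustration:

```python
def control_worn_state(gesture: str, worn_state: dict) -> dict:
    """S131: apply the change associated with a gesture that was judged
    valid in S128; unknown gestures leave the worn state untouched."""
    changes = {
        "grasp_and_pull": ("hem", "pulled out"),
        "roll_sleeve":    ("sleeve", "rolled up"),
        "raise_collar":   ("collar", "turned up"),
        "lower_waist":    ("waist", "lowered"),
    }
    if gesture in changes:
        part, new_state = changes[gesture]
        worn_state = {**worn_state, part: new_state}
    return worn_state

print(control_worn_state("roll_sleeve", {"sleeve": "down", "collar": "flat"}))
```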
Various combinations of the valid gestures described above and the control of (that is, changes to) the worn state are conceivable. The control of the worn state according to valid gestures, according to an embodiment of the present disclosure, will now be described in detail through several examples.
Control example 1 of the worn state
Fig. 8 is a diagram for explaining control example 1, in which the worn state is controlled according to a valid gesture, according to the present embodiment. Note that the left side of Fig. 8 consists of transition diagrams of images in which the bone information of the subject is superimposed on the captured image. The motion detection unit 103 detects actions based on the changes over time in the bone positions shown on the left of Fig. 8. The right side of Fig. 8 consists of transition diagrams of the AR dressing images that the display control unit 105 displays on the display device 19. The display control unit 105 displays the virtual clothes overlaid on the subject at the bone positions shown on the left of Fig. 8, as calculated by the bone position calculation unit 101, and changes the worn state of the virtual clothes according to the movement shown on the left of Fig. 8, as detected by the motion detection unit 103.
As shown in the transition diagram of the bone positions on the left of Fig. 8, if the position of the coordinate B15 (left hand) of the subject changes over time toward the outside from a position where the virtual clothes are displayed overlaid, the motion detection unit 103 judges this to be a valid gesture of grasping and pulling the clothes. In this case, as shown in the transition diagram of the AR dressing image on the right of Fig. 8, the display control unit 105 changes part of the virtual clothes C according to the subject's action (more specifically, the display control unit 105 moves some of the feature points of the virtual clothes C outward). In this way, the worn state in the AR dressing image can be changed so that the virtual clothes C appear in a pulled-out state.
Note that when the virtual clothes C are changed according to the subject's action, the display control unit 105 can determine the degree of change of the virtual clothes C based on the material information stored in association with the virtual clothes C. In this way, by varying the apparent stretch of the pulled-out state according to the material of the virtual clothes C, the AR dressing image can be made more realistic.
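A sketch of this feature-point displacement, with the material information reduced to an assumed per-material stiffness factor:

```python
import numpy as np

def pull_out(feature_points: np.ndarray, grabbed: np.ndarray,
             pull_vector: np.ndarray, stiffness: float) -> np.ndarray:
    """Move the grabbed subset of the garment's feature points along the
    pull direction. `stiffness` in [0, 1] stands in for the stored material
    information: 0 follows the hand fully, 1 barely stretches."""
    moved = feature_points.copy()
    moved[grabbed] += (1.0 - stiffness) * pull_vector
    return moved

points = np.zeros((4, 3))                       # simplified garment feature points
grabbed = np.array([False, True, True, False])  # points near the subject's hand
print(pull_out(points, grabbed, np.array([0.2, 0.0, 0.0]), stiffness=0.3))
```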
Control example 2 of the worn state
When the coordinate of one of the subject's hands changes over time from the coordinate of the other hand toward the coordinate of the elbow, the motion detection unit 103 judges this to be a valid gesture of rolling up a sleeve (a "sleeve-rolling action"). In this case, by changing part of the virtual clothes C according to the subject's sleeve-rolling action (for example, by moving the feature points of the sleeve of the virtual clothes C in the direction of the elbow), the display control unit 105 can control the worn state in the AR dressing image so that the sleeve of the virtual clothes C appears rolled up.
If the coordinates of the subject's hand change over time from the base of the neck toward the chin, the motion detection unit 103 judges this to be a valid gesture of turning up the collar. In this case, by changing the virtual clothes C according to the subject's action (more specifically, by moving the feature points of the collar of the virtual clothes C toward the chin), the display control unit 105 can change the worn state in the AR dressing image so that the collar of the virtual clothes C appears turned up.
Note that if an action opposite to the sleeve-rolling or collar-raising actions described above is detected, the worn state in the AR dressing image can be controlled in the same way.
Control example 3 of the worn state
In addition, the display control unit 105 can adjust the waist position of trousers or a skirt according to the subject's action. Fig. 9 is a diagram for explaining control example 3, in which the worn state is controlled according to a valid gesture, according to the present embodiment.
As shown in the transition diagram of the bone positions on the left of Fig. 9, if the coordinates B9 and B15 of the subject change over time by moving roughly vertically near the coordinates B17 and B21, the motion detection unit 103 judges this to be a valid gesture of lowering the waist position of the clothes. In this case, as shown in the transition diagram of the AR dressing image on the right of Fig. 9, the display control unit 105 changes the virtual clothes C according to the subject's action (more specifically, the display control unit 105 moves all of the feature points of the virtual clothes C downward). In this way, the worn state in the AR dressing image can be changed so that the waist position of the virtual clothes C appears lowered.
Because the waist position of the virtual clothes C is adjusted according to the movement of the hands, in the AR dressing image the user can try out different styles, for example by wearing trousers or a skirt higher or lower on the body.
Here, the size of the virtual clothes displayed overlaid on the subject is usually converted (reshaped) according to the size of the subject. More specifically, in the virtual space shown in Fig. 5, for example, by placing the virtual clothes C according to the bone positions of the subject, the size of the virtual clothes C is changed to conform to the size of the subject. However, there are also cases where a user of the AR dressing system wants to check whether actual clothes of a specific size would fit.
For this reason, the display control unit 105 can, in the virtual space shown in Fig. 5, place virtual clothes of a specific size at a depth (distance) d_virtual equal to the real-space depth d_real, and thereby display an AR dressing image formed by superimposing, on the captured image, the virtual image produced by projecting those virtual clothes. In this way, the user can compare the size of his or her body with virtual clothes of the specific size.
When virtual clothes of a specific size are displayed overlaid on the subject in this way, by further controlling the worn state of the virtual clothes according to the subject's actions as described earlier, the user can confirm whether the virtual clothes fit. For example, by adjusting the waist position of virtual trousers or a virtual skirt to the preferred waist position, the user can confirm whether the virtual clothes C fit at that position.
This completes the description, through several specific examples, of the control of the worn state according to actions of the subject in the present embodiment. Next, the placement of the camera 15 included in the AR dressing system 1 will be described in detail.
3-3. Display from an objective viewpoint
In the AR dressing system according to an embodiment of the present disclosure described above, by placing the camera 15 and the display device 19 in front of the subject A as shown in Fig. 1 and displaying virtual clothes overlaid on the front of the subject A, the AR dressing image is displayed from the same viewpoint as when the subject A tries on clothes in front of a mirror. In this case, however, it is difficult for the subject to confirm how the clothes look when seen by other people (that is, from other angles). For this reason, the AR dressing system according to the present embodiment of the present disclosure can generate AR dressing images from an objective viewpoint using the various methods below.
Changing the position of the camera 15
Although the camera 15 and the display device 19 are placed in front of the subject A in Fig. 1, the position of the camera 15 may be changed to behind the subject A, as shown in Fig. 10. In this case, the camera 15 captures the rear view of the subject A, and the display control unit 105 displays, on the display device 19 positioned in front of the subject A, an AR dressing image in which virtual clothes are overlaid on the rear view of the subject A. In this way, the user can check his or her rear view in the AR dressing state. In this case, a rear view of the virtual clothes overlaid on the subject's rear view is obviously needed, so the display control unit 105 draws the virtual clothes based on virtual clothing data for the rear view.
Although an example in which the position of the camera 15 is changed to behind the subject A has been given, display control of AR dressing images from an objective viewpoint can also be realized by changing the position of the camera 15 to other positions, such as to the side of the subject A or at an angle to the subject A. Note that because the orientation of the virtual clothes will change according to the orientation of the subject A, the information processing apparatus 10 can cope by holding virtual clothing data for various orientations. Alternatively, by using three-dimensional data of the virtual clothes modeled in advance, the information processing apparatus 10 can draw the virtual clothes in various orientations.
Delayed display
In addition, even with the camera 15 placed as shown in Fig. 1, if the display control unit 105 displays the AR dressing image on the display device 19 with a time lag rather than only in real time, the user can check his or her AR dressing appearance from an objective viewpoint.
For example, if the display control unit 105 performs control to display the AR dressing image on the display device 19 with a delay of one second, then after the user takes two seconds to rotate his or her body once in a roughly horizontal direction, the rear-facing AR dressing image from one second earlier is displayed on the display device 19. By displaying the AR dressing image with a certain delay in this way, the user can check his or her AR dressing appearance from an objective viewpoint.
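A sketch of this delayed display as a fixed-length frame buffer, assuming a constant frame rate (at 30 fps, a one-second delay is 30 frames):

```python
from collections import deque

class DelayedDisplay:
    """Return each pushed AR dressing frame `delay_frames` frames late."""
    def __init__(self, delay_frames: int):
        self._buffer = deque(maxlen=delay_frames + 1)

    def push(self, frame):
        self._buffer.append(frame)
        if len(self._buffer) == self._buffer.maxlen:
            return self._buffer[0]  # the frame from `delay_frames` frames ago
        return None                 # still filling up at the start

display = DelayedDisplay(delay_frames=3)
print([display.push(i) for i in range(6)])  # [None, None, None, 0, 1, 2]
```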
In addition, the information processing apparatus 10 can record the AR dressing images (a moving image) of the subject rotating and fast-forward or rewind according to the user's instructions while playing back the AR dressing images. In this way too, the user can check his or her AR dressing appearance from an objective viewpoint.
Display based on a reconstructed three-dimensional shape
By reconstructing the three-dimensional shape of the captured subject and displaying virtual clothes from an arbitrary direction on the subject whose three-dimensional shape has been reconstructed, the information processing apparatus 10 can display AR dressing images from an objective viewpoint.
To reconstruct the three-dimensional shape of the subject, for example, a plurality of cameras 15 and sensors 17 are provided and the subject is captured from a plurality of viewpoints, as shown in Fig. 11. In this way, the information processing apparatus 10 can reconstruct the three-dimensional shape of the subject in real time. Note that although the three-dimensional shape of the subject is captured from a plurality of viewpoints and then reconstructed in the example shown in Fig. 11, the method of reconstructing the three-dimensional shape is not limited to this, and the shape may also be reconstructed with two cameras or a single camera.
In this way, virtual clothes are displayed overlaid on the subject whose three-dimensional shape has been reconstructed in real time, and by operating the mouse cursor 32 as shown in Fig. 11, the user can freely rotate the reconstructed subject. In this way, the user can check his or her appearance from any angle in real time while trying on the virtual clothes.
The information processing apparatus 10 may also perform the reconstruction of the three-dimensional shape afterwards, based on images of the subject captured in advance, and display arbitrary virtual clothes overlaid on the subject. In this case, by freely rotating the reconstructed subject with the mouse cursor 32, the user can check the subject trying on the virtual clothes from any angle.
Note that if the AR dressing image of the subject is displayed in real time as shown in Fig. 11, for example, the device used to operate the mouse cursor 32 may be a remote controller (not shown) instead of a mouse.
4. Conclusion
As described above, with the AR dressing system 1 according to the above embodiment of the present disclosure, the worn state is controlled according to the subject's actions, so the user can try out various styles. In addition, by realizing an interactive fitting room, the AR dressing system according to the above embodiment of the present disclosure can provide more realistic AR dressing images.
Furthermore, according to the above embodiment, the AR dressing appearance of the subject can be checked from an objective viewpoint, such as the subject's rear view or side view.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
In addition, although the AR dressing system described above has mainly been described using the example of trying on virtual clothes, the objects to be tried on are not limited to clothes and may be accessories such as glasses, hats, and belts.
Furthermore, although the AR dressing system described above has been described for the case where the subject is a person, the subject is not limited to a person and may be an animal such as a dog or a cat. In that case, an AR dressing system can be provided that displays an image of, for example, pet clothing superimposed on a captured image of the animal.
Additionally, the present technology may also be configured as below.
(1) An information processing apparatus including:
a motion detection unit detecting an action of a subject that has been captured; and
a display control unit changing, according to the action detected by the motion detection unit, the worn state of at least one of virtual clothes or accessories displayed overlaid on the subject.
(2) The information processing apparatus according to (1), wherein
the display control unit changes part or all of at least one of the virtual clothes or accessories according to the position of the subject's action.
(3) The information processing apparatus according to (2), wherein
the display control unit determines the degree of change of at least one of the clothes or accessories based on material information associated with the clothes or accessories.
(4) The information processing apparatus according to any one of (1) to (3), wherein
the display control unit moves, according to the position of the action detected by the motion detection unit, feature points that define the shape of at least one of the virtual clothes or accessories.
(5) The information processing apparatus according to any one of (1) to (4), wherein
the motion detection unit detects an action of the subject grasping and pulling with a hand, and
the display control unit changes the worn state by stretching part of at least one of the virtual clothes or accessories in the direction in which the subject pulled.
(6) The information processing apparatus according to any one of (1) to (5), wherein
the motion detection unit detects a sleeve-rolling action in which the subject moves one hand from the wrist of the other arm toward the elbow, and
the display control unit changes the worn state by moving the sleeve of the virtual clothes displayed overlaid on the subject toward the elbow according to the sleeve-rolling action.
(7) The information processing apparatus according to any one of (1) to (6), wherein
the motion detection unit detects an action of the subject turning up a collar, and
the display control unit changes the worn state by turning up the collar of the virtual clothes to be displayed overlaid on the subject according to the collar-raising action.
(8) The information processing apparatus according to any one of (1) to (7), wherein
the motion detection unit detects an action of the subject raising or lowering the waist position of clothes, and
the display control unit changes the worn state by adjusting the position of the virtual clothes to be displayed overlaid on the subject according to the action of raising or lowering the waist position of the clothes.
(9) A display control method including:
detecting an action of a subject that has been captured; and
changing, in response to the detected action, the worn state of at least one of virtual clothes or accessories displayed overlaid on the subject.
(10) A program causing a computer to execute:
a process of detecting an action of a subject that has been captured; and
a process of changing, according to the detected action, the worn state of at least one of virtual clothes or accessories displayed overlaid on the subject.
(11) The program according to (10), wherein
the process of changing changes part or all of at least one of the virtual clothes or accessories according to the position of the subject's action.
(12) The program according to (11), wherein
the process of changing determines the degree of change of at least one of the clothes or accessories based on material information associated with at least one of the clothes or accessories.
(13) The program according to any one of (10) to (12), wherein
the process of changing moves, according to the position of the detected action, feature points that define the shape of at least one of the virtual clothes or accessories.
(14) The program according to any one of (10) to (13), wherein
the process of detecting detects an action of the subject grasping and pulling with a hand, and
the process of changing changes the worn state by stretching part of at least one of the virtual clothes or accessories in the direction in which the subject pulled.
(15) The program according to any one of (10) to (14), wherein
the process of detecting detects a sleeve-rolling action in which the subject moves one hand from the wrist of the other arm toward the elbow, and
the process of changing changes the worn state by moving the sleeve of the virtual clothes displayed overlaid on the subject toward the elbow according to the sleeve-rolling action.
(16) The program according to any one of (10) to (15), wherein
the process of detecting detects an action of the subject turning up a collar, and
the process of changing changes the worn state by turning up the collar of the virtual clothes to be displayed overlaid on the subject according to the collar-raising action.
(17) The program according to any one of (10) to (16), wherein
the process of detecting detects an action of the subject raising or lowering the waist position of clothes, and
the process of changing changes the worn state by adjusting the position of the virtual clothes to be displayed overlaid on the subject according to the action of raising or lowering the waist position of the clothes.
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2011-245302 filed in the Japan Patent Office on November 9, 2011, the entire content of which is hereby incorporated by reference.

Claims (17)

1. An information processing apparatus comprising:
an operation detecting unit detecting an operation of a subject that has been captured; and
a display control unit changing, in accordance with the operation detected by the operation detecting unit, a worn state of at least one of virtual clothing or accessories displayed overlaid on the subject.
2. The information processing apparatus according to claim 1, wherein
the display control unit changes part or all of at least one of the virtual clothing or accessories in accordance with a position of the operation by the subject.
3. The information processing apparatus according to claim 2, wherein
the display control unit determines a degree of change of at least one of the clothing or accessories based on material information associated with the clothing or accessories.
4. The information processing apparatus according to claim 2, wherein
the display control unit moves, in accordance with a position of the operation detected by the operation detecting unit, a feature point indicating a feature of a shape of at least one of the virtual clothing or accessories.
5. The information processing apparatus according to claim 2, wherein
the operation detecting unit detects an operation in which the subject grasps and pulls with a hand, and
the display control unit changes the worn state by stretching a part of at least one of the virtual clothing or accessories in the direction in which the subject has pulled.
6. The information processing apparatus according to claim 2, wherein
the operation detecting unit detects an operation in which the subject moves one hand from the wrist of the other hand toward an elbow to roll up a sleeve, and
the display control unit changes the worn state by moving a sleeve of the virtual clothing displayed overlaid on the subject toward the elbow in accordance with the sleeve-rolling operation.
7. The information processing apparatus according to claim 2, wherein
the operation detecting unit detects an operation in which the subject turns up a collar, and
the display control unit changes the worn state by turning up the collar of the virtual clothing displayed overlaid on the subject in accordance with the collar-turning operation.
8. The information processing apparatus according to claim 2, wherein
the operation detecting unit detects an operation in which the subject raises or lowers a waist position of clothing, and
the display control unit changes the worn state by adjusting a position of the virtual clothing displayed overlaid on the subject in accordance with the operation of raising or lowering the waist position.
9. A display control method comprising:
detecting an operation of a subject that has been captured; and
changing, in response to the detected operation, a worn state of at least one of virtual clothing or accessories displayed overlaid on the subject.
10. A program causing a computer to execute processing comprising:
detecting an operation of a subject that has been captured; and
changing, in accordance with the detected operation, a worn state of at least one of virtual clothing or accessories displayed overlaid on the subject.
11. The program according to claim 10, wherein
the changing changes part or all of at least one of the virtual clothing or accessories in accordance with a position of the operation by the subject.
12. The program according to claim 11, wherein
the changing determines a degree of change of at least one of the clothing or accessories based on material information associated with at least one of the clothing or accessories.
13. The program according to claim 11, wherein
the changing moves, in accordance with a position of the detected operation, a feature point indicating a feature of a shape of at least one of the virtual clothing or accessories.
14. The program according to claim 11, wherein
the detecting detects an operation in which the subject grasps and pulls with a hand, and
the changing changes the worn state by stretching a part of at least one of the virtual clothing or accessories in the direction in which the subject has pulled.
15. The program according to claim 11, wherein
the detecting detects an operation in which the subject moves one hand from the wrist of the other hand toward an elbow to roll up a sleeve, and
the changing changes the worn state by moving a sleeve of the virtual clothing displayed overlaid on the subject toward the elbow in accordance with the sleeve-rolling operation.
16. The program according to claim 11, wherein
the detecting detects an operation in which the subject turns up a collar, and
the changing changes the worn state by turning up the collar of the virtual clothing displayed overlaid on the subject in accordance with the collar-turning operation.
17. The program according to claim 11, wherein
the detecting detects an operation in which the subject raises or lowers a waist position of clothing, and
the changing changes the worn state by adjusting a position of the virtual clothing displayed overlaid on the subject in accordance with the operation of raising or lowering the waist position.
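By way of illustration only, the following Python sketch renders the mechanism of claims 3 to 5 (and program claims 12 to 14): feature points outlining the garment are displaced in the pulled direction, and the degree of change is weighted by a material-dependent elasticity value. The function name, the elasticity table, and all constants are assumptions made for this sketch, not values from the disclosure.

import math

# Assumed material information: a larger elasticity means the garment
# stretches further for the same pull (cf. claims 3 and 12).
MATERIAL_ELASTICITY = {"cotton": 0.4, "knit": 0.9, "leather": 0.1}


def stretch_feature_points(points, grasp, pull_vector, material, radius=0.3):
    """Displace garment feature points near the grasp point along the pull.

    points:      list of (x, y) feature points outlining the garment shape
    grasp:       (x, y) position where the subject grasped the garment
    pull_vector: (dx, dy) direction and magnitude of the pull
    radius:      distance beyond which a feature point is unaffected
    """
    elasticity = MATERIAL_ELASTICITY.get(material, 0.5)
    moved = []
    for x, y in points:
        distance = math.hypot(x - grasp[0], y - grasp[1])
        # Feature points closer to the grasp point follow the pull more
        # strongly; points at or beyond the radius do not move at all.
        weight = max(0.0, 1.0 - distance / radius)
        moved.append((x + elasticity * weight * pull_vector[0],
                      y + elasticity * weight * pull_vector[1]))
    return moved


# Pulling the hem of a knit garment downward: the grasped corner moves most.
hem = [(0.0, 0.0), (0.15, 0.0), (0.3, 0.0)]
print(stretch_feature_points(hem, grasp=(0.0, 0.0), pull_vector=(0.0, -0.2), material="knit"))

Distance weighting of this kind is a common way to approximate local cloth deformation without a full physics simulation; a production system would more likely drive a mass-spring or skinned mesh model from the same detected operation.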
CN2012104320512A 2011-11-09 2012-11-02 Information processing apparatus, display control method, and program Pending CN103105927A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-245302 2011-11-09
JP2011245302A JP2013101526A (en) 2011-11-09 2011-11-09 Information processing apparatus, display control method, and program

Publications (1)

Publication Number Publication Date
CN103105927A true CN103105927A (en) 2013-05-15

Family

ID=48223408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012104320512A Pending CN103105927A (en) 2011-11-09 2012-11-02 Information processing apparatus, display control method, and program

Country Status (3)

Country Link
US (1) US20130113830A1 (en)
JP (1) JP2013101526A (en)
CN (1) CN103105927A (en)


Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7953648B2 (en) * 2001-11-26 2011-05-31 Vock Curtis A System and methods for generating virtual clothing experiences
US20120287122A1 (en) * 2011-05-09 2012-11-15 Telibrahma Convergent Communications Pvt. Ltd. Virtual apparel fitting system and method
US9113128B1 (en) 2012-08-31 2015-08-18 Amazon Technologies, Inc. Timeline interface for video content
JP5613741B2 (en) * 2012-09-27 2014-10-29 株式会社東芝 Image processing apparatus, method, and program
US9785839B2 (en) * 2012-11-02 2017-10-10 Sony Corporation Technique for combining an image and marker without incongruity
US9389745B1 (en) 2012-12-10 2016-07-12 Amazon Technologies, Inc. Providing content via multiple display devices
US20140201023A1 (en) * 2013-01-11 2014-07-17 Xiaofan Tang System and Method for Virtual Fitting and Consumer Interaction
US10424009B1 (en) * 2013-02-27 2019-09-24 Amazon Technologies, Inc. Shopping experience using multiple computing devices
US11019300B1 (en) 2013-06-26 2021-05-25 Amazon Technologies, Inc. Providing soundtrack information during playback of video content
KR101482419B1 (en) * 2013-07-15 2015-01-16 서울대학교산학협력단 Method and apparatus for generating motion data
CN103473806B (en) * 2013-09-23 2016-03-16 北京航空航天大学 A kind of clothes 3 D model construction method based on single image
US9613424B2 (en) 2013-09-23 2017-04-04 Beihang University Method of constructing 3D clothing model based on a single image
US10080963B2 (en) 2014-03-28 2018-09-25 Sony Interactive Entertainment Inc. Object manipulation method, object manipulation program, and information processing apparatus
JP2015191480A (en) * 2014-03-28 2015-11-02 株式会社ソニー・コンピュータエンタテインメント Information processor, operation method of object and operation program of object
US9575560B2 (en) 2014-06-03 2017-02-21 Google Inc. Radar-based gesture-recognition through a wearable device
JP6396694B2 (en) * 2014-06-19 2018-09-26 株式会社バンダイ Game system, game method and program
US9811164B2 (en) 2014-08-07 2017-11-07 Google Inc. Radar-based gesture sensing and data transmission
US9778749B2 (en) 2014-08-22 2017-10-03 Google Inc. Occluded gesture recognition
US11169988B2 (en) 2014-08-22 2021-11-09 Google Llc Radar recognition-aided search
JP6262105B2 (en) * 2014-09-04 2018-01-17 株式会社東芝 Image processing apparatus, image processing system, image processing method, and program
US10104218B2 (en) * 2014-09-23 2018-10-16 Lg Electronics Inc. Mobile terminal and method for controlling same
US9600080B2 (en) 2014-10-02 2017-03-21 Google Inc. Non-line-of-sight radar-based gesture recognition
US10275935B2 (en) 2014-10-31 2019-04-30 Fyusion, Inc. System and method for infinite synthetic image generation from multi-directional structured image array
US10262426B2 (en) 2014-10-31 2019-04-16 Fyusion, Inc. System and method for infinite smoothing of image sequences
US10016162B1 (en) 2015-03-23 2018-07-10 Google Llc In-ear health monitoring
EP3289434A1 (en) 2015-04-30 2018-03-07 Google LLC Wide-field radar-based gesture recognition
US10310620B2 (en) 2015-04-30 2019-06-04 Google Llc Type-agnostic RF signal representations
KR102002112B1 (en) 2015-04-30 2019-07-19 구글 엘엘씨 RF-based micro-motion tracking for gesture tracking and recognition
US9693592B2 (en) 2015-05-27 2017-07-04 Google Inc. Attaching electronic components to interactive textiles
US10088908B1 (en) 2015-05-27 2018-10-02 Google Llc Gesture detection and interactions
JP6750193B2 (en) * 2015-06-15 2020-09-02 花王株式会社 Walking cycle detection method and detection apparatus
US11095869B2 (en) 2015-09-22 2021-08-17 Fyusion, Inc. System and method for generating combined embedded multi-view interactive digital media representations
US11006095B2 (en) 2015-07-15 2021-05-11 Fyusion, Inc. Drone based capture of a multi-view interactive digital media
US10222932B2 (en) 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
US10242474B2 (en) 2015-07-15 2019-03-26 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10852902B2 (en) 2015-07-15 2020-12-01 Fyusion, Inc. Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity
US10147211B2 (en) 2015-07-15 2018-12-04 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10165199B2 (en) * 2015-09-01 2018-12-25 Samsung Electronics Co., Ltd. Image capturing apparatus for photographing object according to 3D virtual object
US11783864B2 (en) 2015-09-22 2023-10-10 Fyusion, Inc. Integration of audio into a multi-view interactive digital media representation
US10817065B1 (en) 2015-10-06 2020-10-27 Google Llc Gesture recognition using multiple antenna
US9916664B2 (en) * 2016-02-09 2018-03-13 Daqri, Llc Multi-spectrum segmentation for computer vision
CN105681684A (en) * 2016-03-09 2016-06-15 北京奇虎科技有限公司 Image real-time processing method and device based on mobile terminal
CN105786432A (en) * 2016-03-18 2016-07-20 北京奇虎科技有限公司 Method and device for displaying virtual image on mobile terminal
WO2017192167A1 (en) 2016-05-03 2017-11-09 Google Llc Connecting an electronic component to an interactive textile
US11202017B2 (en) 2016-10-06 2021-12-14 Fyusion, Inc. Live style transfer on a mobile device
EP3327544B1 (en) * 2016-11-25 2021-06-23 Nokia Technologies Oy Apparatus, associated method and associated computer readable medium
CN106875470A (en) * 2016-12-28 2017-06-20 广州华多网络科技有限公司 The method and system for changing main broadcaster's image of live platform
US10437879B2 (en) 2017-01-18 2019-10-08 Fyusion, Inc. Visual search using multi-view interactive digital media representations
US10313651B2 (en) 2017-05-22 2019-06-04 Fyusion, Inc. Snapshots at predefined intervals or angles
US10665022B2 (en) * 2017-06-06 2020-05-26 PerfectFit Systems Pvt. Ltd. Augmented reality display system for overlaying apparel and fitness information
US11069147B2 (en) 2017-06-26 2021-07-20 Fyusion, Inc. Modification of multi-view interactive digital media representation
US10453265B1 (en) * 2018-04-05 2019-10-22 Page International—FZ—LLC Method and device for the virtual try-on of garments based on augmented reality with multi-detection
US10592747B2 (en) 2018-04-26 2020-03-17 Fyusion, Inc. Method and apparatus for 3-D auto tagging
US11321499B2 (en) * 2020-04-13 2022-05-03 Macy's, Inc. System, method, and computer program product for interactive user interfaces
USD957410S1 (en) 2020-04-13 2022-07-12 Macy's, Inc. Display screen or portion thereof with graphical user interface
WO2024043088A1 (en) * 2022-08-25 2024-02-29 日本電気株式会社 Virtual try-on system, virtual try-on method, and recording medium

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE1013816A6 (en) * 2000-10-30 2002-09-03 Douelou Nv Production of made to order clothing e.g. for Internet, where the customers inputs their age, weight, height, and collar size into a controller which then determines the clothing pattern
WO2006017079A2 (en) * 2004-07-09 2006-02-16 Gesturerad, Inc. Gesture-based reporting method and system
US7328819B2 (en) * 2004-09-27 2008-02-12 Kimberly-Clark Worldwide, Inc. Self-contained liquid dispenser with a spray pump mechanism
GB2419433A (en) * 2004-10-20 2006-04-26 Glasgow School Of Art Automated Gesture Recognition
US8982109B2 (en) * 2005-03-01 2015-03-17 Eyesmatch Ltd Devices, systems and methods of capturing and displaying appearances
TW200828043A (en) * 2006-12-29 2008-07-01 Cheng-Hsien Yang Terminal try-on simulation system and operating and applying method thereof
US8036416B2 (en) * 2007-11-06 2011-10-11 Palo Alto Research Center Incorporated Method and apparatus for augmenting a mirror with information related to the mirrored contents and motion
US20100030578A1 (en) * 2008-03-21 2010-02-04 Siddique M A Sami System and method for collaborative shopping, business and entertainment
JP2010114299A (en) * 2008-11-07 2010-05-20 Mitsubishi Heavy Ind Ltd Method of manufacturing photoelectric conversion device, and photoelectric conversion device
WO2010073432A1 (en) * 2008-12-24 2010-07-01 株式会社ソニー・コンピュータエンタテインメント Image processing device and image processing method
US9417700B2 (en) * 2009-05-21 2016-08-16 Edge3 Technologies Gesture recognition systems and related methods
US8275590B2 (en) * 2009-08-12 2012-09-25 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
US8564534B2 (en) * 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US20110150271A1 (en) * 2009-12-18 2011-06-23 Microsoft Corporation Motion detection using depth images
US8864581B2 (en) * 2010-01-29 2014-10-21 Microsoft Corporation Visual based identitiy tracking
US8490002B2 (en) * 2010-02-11 2013-07-16 Apple Inc. Projected display shared workspaces
US8457353B2 (en) * 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
US20120206348A1 (en) * 2011-02-10 2012-08-16 Kim Sangki Display device and method of controlling the same
US8723789B1 (en) * 2011-02-11 2014-05-13 Imimtek, Inc. Two-dimensional method and system enabling three-dimensional user interaction with a device
GB201102794D0 (en) * 2011-02-17 2011-03-30 Metail Ltd Online retail system
US8761437B2 (en) * 2011-02-18 2014-06-24 Microsoft Corporation Motion recognition
WO2012120622A1 (en) * 2011-03-07 2012-09-13 株式会社ノングリッド Electronic mirror system
WO2012126103A1 (en) * 2011-03-23 2012-09-27 Mgestyk Technologies Inc. Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use
WO2013123306A1 (en) * 2012-02-16 2013-08-22 Brown University System and method for simulating realistic clothing
US9652043B2 (en) * 2012-05-14 2017-05-16 Hewlett-Packard Development Company, L.P. Recognizing commands with a depth sensor
JP5613741B2 (en) * 2012-09-27 2014-10-29 株式会社東芝 Image processing apparatus, method, and program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI644280B (en) * 2014-08-14 2018-12-11 蔡曜隆 Augmented reality (ar) business card system
CN108292449A (en) * 2015-03-31 2018-07-17 电子湾有限公司 Three-dimensional garment is changed using gesture
US11662829B2 (en) 2015-03-31 2023-05-30 Ebay Inc. Modification of three-dimensional garments using gestures
CN105607095A (en) * 2015-07-31 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Terminal control method and terminal
CN108885482A (en) * 2016-03-31 2018-11-23 英特尔公司 Augmented reality in visual field including image
CN108885482B (en) * 2016-03-31 2023-04-28 英特尔公司 Methods, apparatus, systems, devices, and media for augmented reality in a field of view including an image
CN106504055A (en) * 2016-10-14 2017-03-15 深圳前海火元素视觉科技有限公司 Auto parts machinery virtuality upgrade method and device

Also Published As

Publication number Publication date
JP2013101526A (en) 2013-05-23
US20130113830A1 (en) 2013-05-09

Similar Documents

Publication Publication Date Title
CN103105927A (en) Information processing apparatus, display control method, and program
CN103218773B (en) Message processing device, display control method and program
CN103218506B (en) Information processor, display control method and program
WO2013069360A1 (en) Information processing device, display control method, and program
US11550156B2 (en) Sensor fusion for electromagnetic tracking
CN109840947A (en) Implementation method, device, equipment and the storage medium of augmented reality scene
US8655015B2 (en) Image generation system, image generation method, and information storage medium
CN111199583B (en) Virtual content display method and device, terminal equipment and storage medium
CN104243951A (en) Image processing device, image processing system and image processing method
KR20170003713A (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
KR101892735B1 (en) Apparatus and Method for Intuitive Interaction
US11027195B2 (en) Information processing apparatus, information processing method, and program
CN104353240A (en) Running machine system based on Kinect
JPWO2019171557A1 (en) Image display system
WO2018198909A1 (en) Information processing device, information processing method, and program
JP6596452B2 (en) Display device, display method and display program thereof, and entertainment facility
CN111279410B (en) Display apparatus and display apparatus control method
CN110520904A (en) Display control unit, display control method and program
CN105279354B (en) User can incorporate the situation construct system of the story of a play or opera
KR102287939B1 (en) Apparatus and method for rendering 3dimensional image using video
JP2018121132A (en) Image providing system, image providing method, and image providing program
US20220028123A1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130515