WO2017147484A1 - Personal life story simulation system - Google Patents

Personal life story simulation system

Info

Publication number
WO2017147484A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
animated
images
scene
facial
Prior art date
Application number
PCT/US2017/019444
Other languages
French (fr)
Inventor
Ting CHU
Jiancheng XU
Original Assignee
Vivhist Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivhist Inc. filed Critical Vivhist Inc.
Priority to KR1020187027477A priority Critical patent/KR20180132063A/en
Priority to US16/079,889 priority patent/US20190051032A1/en
Priority to JP2018545266A priority patent/JP2019514095A/en
Priority to EP17757344.1A priority patent/EP3420534A4/en
Priority to CN201780018702.4A priority patent/CN109416840A/en
Publication of WO2017147484A1 publication Critical patent/WO2017147484A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65: Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655: Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, by importing photos, e.g. of the player
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/825: Fostering virtual characters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30: Polynomial surface description
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/24: Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A system for generating an animated life story of a person is disclosed. The system may capture an image of the person's face and generate a computer-animated simulation of the person's face. The computer-animated simulation of the person's face may be superimposed upon a computer-generated character based on personal historical data of the person, so that a computer-generated life story of the person from an earlier period of time to the present may be generated as a movie or slideshow.

Description

PERSONAL LIFE STORY SIMULATION SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Prov. Pat. App. Ser. No. 62/299,391, filed on February 24, 2016, the entire contents of which are expressly incorporated herein by reference.
STATEMENT RE: FEDERALLY SPONSORED RESEARCH/DEVELOPMENT
Not Applicable
BACKGROUND
The various embodiments and aspects described herein relate to a personal life story simulation system.
In today's electronic world, people create slideshows of their lives. To do so, they aggregate photographs of themselves, of friends and family, and of places they have been in order to tell their story through still photos and/or videos. If a person has videos of themselves, they may interject these videos into the slideshow where appropriate, or splice a series of videos together to create the story of themselves. However, not everyone has the photos and videos of themselves, their friends and family, or the places they have been that are needed to create such a story. Older people, in particular, may not have photos and videos of their childhood. For this reason, not everyone is able to create a story of themselves with the videos and photos they have at hand.
Accordingly, there is a need in the art for a system and method for creating a story of a person.
BRIEF SUMMARY
An electronic platform is disclosed herein which allows a user to customize a simulated life story with his or her facial features. The electronic platform takes a picture of the user's face, animates the picture, and superimposes the animated facial features onto an animated character in scenes of a movie or slideshow selected based on personal historical data of the user. By doing so, even if the user does not have a photo or video of themselves in a particular place or time period (e.g., childhood), a simulation of the life story of the user is generated from the personal historical data provided by the user and from the facial photo of the user, which is superimposed onto a computer-generated character or body so that the character resembles the user.
More particularly, a computer implemented method is disclosed for aggregating one or more facial images of the user and historical data about the user's current and past life situation, and for merging the images and the historical data to generate a simulated story about the user. The method comprises the steps of: collecting historical user data with a software application; collecting one or more facial images of the user; animating the one or more facial images; merging the animated facial image of the user onto an animated character in an animated scene based on the historical user data; and generating a slideshow or movie clip from the merged animated facial image and animated scene. The length of the slideshow or movie clip depends on the amount of information obtained from the user. A sketch of this pipeline is given below.
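By way of illustration only, the claimed pipeline can be expressed in a few lines of Python. All names here (UserProfile, animate_face, select_scene, the stage labels) are hypothetical stand-ins, not part of the disclosure; the sketch simply shows the order of the collecting, animating, merging and generating steps, and how the story length grows with the amount of data supplied.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical life stages, in chronological order (cf. Figures 6-10).
STAGES = ["toddler", "grade_school", "teen", "adult", "senior"]

@dataclass
class UserProfile:
    user_id: str
    facial_image: str                                   # path to the headshot
    historical_data: Dict[str, Dict[str, str]] = field(default_factory=dict)

def animate_face(image_path: str) -> str:
    # Stand-in for the animation step; a real system would build an
    # animated face model from the captured headshot here.
    return f"animated({image_path})"

def select_scene(stage: str, answers: Dict[str, str]) -> str:
    # Stand-in for picking a premade/stock scene that matches the
    # historical data entered for this life stage.
    return f"{stage} scene in {answers.get('city', 'a generic city')}"

def generate_life_story(profile: UserProfile) -> List[str]:
    """One merged scene per life stage with data; the story length
    therefore grows with the amount of information the user supplies."""
    face = animate_face(profile.facial_image)
    return [f"{face} merged into {select_scene(s, profile.historical_data[s])}"
            for s in STAGES if s in profile.historical_data]

profile = UserProfile("user-1", "headshot.jpg",
                      {"teen": {"city": "Boston", "favorite_game": "chess"}})
print(generate_life_story(profile))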
In the method, the animated scene may be based on stock images of places, occupations, sports and living or working environments.
The method may further comprise the step of altering the animated facial image of the user to account for the age of the user. The altering step may include digitally smoothing facial features of the user, or adding wrinkles to the animated facial image of the user, to make the user appear younger or older, as sketched below.
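As a rough illustration of this smoothing/wrinkling step, the sketch below uses OpenCV (assumed available as cv2): an edge-preserving bilateral filter flattens fine skin texture for a younger look, while an unsharp-mask style boost exaggerates fine lines for an older look. The filter parameters and file names are illustrative only, not taken from the disclosure.

```python
import cv2

def adjust_apparent_age(face_bgr, direction: str):
    """Crude age adjustment: 'younger' smooths skin texture, 'older'
    amplifies high-frequency detail so fine lines stand out."""
    if direction == "younger":
        # Edge-preserving smoothing flattens pores/wrinkles but keeps edges.
        return cv2.bilateralFilter(face_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # Unsharp-mask style boost: push the image away from its blurred copy.
    blurred = cv2.GaussianBlur(face_bgr, (0, 0), sigmaX=3)
    return cv2.addWeighted(face_bgr, 1.6, blurred, -0.6, 0)

face = cv2.imread("headshot.jpg")             # illustrative file name
cv2.imwrite("younger.jpg", adjust_apparent_age(face, "younger"))
cv2.imwrite("older.jpg", adjust_apparent_age(face, "older"))
```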
In the method, the animated scene may include premade animated scenery. The method may further comprise steps of presenting a preselected scene from the slideshow or movie clip; and providing an option to include customized information into select areas of the scene on buildings, people and/or objects.
The option may be a drop down list of trademarks, words, images or combinations thereof. In the method, the customized information added into the preselected scene may be transferred to other scenes in the slideshow or movie clip.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features and advantages of the various embodiments disclosed herein will be better understood with respect to the following description and drawings, in which like numbers refer to like parts throughout, and in which: Figure 1 illustrates a schematic view of a personal life story simulation system;
Figure 2 illustrates a screen of a smart phone used to acquire a headshot photo image of the user;
Figure 3 illustrates the screen of the smart phone after the headshot photo image of the user is acquired, allowing the user to confirm or reject the headshot photo image;
Figure 4 illustrates the screen of the smart phone allowing the user to indicate whether the user is a male or female;
Figure 5 illustrates the screen of the smart phone showing a body of a computer generated character which can be altered by the user so that the computer generated character reflects the body type of the user;
Figure 6 illustrates the screen of the smart phone showing an age profile screen;
Figure 7 illustrates the screen of the smart phone showing a childhood memories profile screen;
Figure 8 illustrates the screen of the smart phone showing a teenhood memories profile screen;
Figure 9 illustrates the screen of the smart phone showing an adulthood memories profile screen;
Figure 10 illustrates the screen of the smart phone showing a seniorhood memories profile screen;
Figure 11 illustrates the screen of the smart phone showing a city profile screen;
Figure 12 illustrates the screen of the smart phone showing an education profile screen;
Figure 13 illustrates the screen of the smart phone showing an occupation profile screen;
Figure 14 illustrates the screen of the smart phone showing a shape profile screen;
Figure 15 illustrates the screen of the smart phone showing a personal or business advertisement preview screen;
Figure 16 illustrates the screen of the smart phone showing a play story screen; and
Figure 17 illustrates the screen of the smart phone showing a story video clip.
DETAILED DESCRIPTION
Referring now to the drawings, a computer implemented method for aggregating one or more facial images of the user and historical data about the user's current and past life situation, and merging the images and the historical data to generate a simulated story about the user, is disclosed. An application on a mobile device (e.g., smart phone) 10 or desktop computer may guide the user in collecting the images and the historical data from the user. The application may transmit the images and the historical data about the user to a cloud-based server 12. The images and historical data about the user may be stored in a user data repository 14 on the cloud-based server 12. Based on the historical data entered by the user, the server 12 selects the appropriate image(s) and videos that correspond to the user's life. The server 12 superimposes the facial images of the user onto the images and videos and generates a movie or slideshow 18 of the user's life.
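The upload-and-store flow between the application and the cloud-based server 12 might look like the following minimal sketch. Flask is used here purely as an assumed stand-in for the server framework; the routes are hypothetical, and the in-memory REPOSITORY dictionary stands in for the user data repository 14.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
REPOSITORY = {}   # in-memory stand-in for the user data repository 14

@app.route("/users/<uid>/history", methods=["POST"])
def upload_history(uid):
    # The app posts the user's historical data as JSON keyed by category.
    record = REPOSITORY.setdefault(uid, {"history": {}, "images": []})
    record["history"].update(request.get_json())
    return jsonify(status="stored", categories=len(record["history"]))

@app.route("/users/<uid>/images", methods=["POST"])
def upload_image(uid):
    # The captured headshot arrives as a multipart file upload.
    record = REPOSITORY.setdefault(uid, {"history": {}, "images": []})
    record["images"].append(request.files["face"].read())
    return jsonify(status="stored", images=len(record["images"]))

if __name__ == "__main__":
    app.run()   # the real system would run this on the cloud server 12
```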
The images and videos may be created virtually or may come from third-party stock image and video content services 16 (e.g., bigstockphoto.com or istockphoto.com). The server may have a repository of images, stock images, images generated in-house, videos, stock videos and videos generated in-house.
Referring now to Figure 1, mobile devices 10 in the form of a smart phone or tablet are shown, along with a desktop computer 20. The computer implemented method may be initiated by launching an app on the smart phone or tablet computer 10 or starting a program on the desktop computer 20. Upon start of the application, a start button 22 may be shown which guides the user through steps to aggregate one or more images of the user and historical data about the user so that a movie or slideshow 18 of the user's life may be simulated and shown to the user or another person.
Upon clicking the start button 22, the first step is to acquire a headshot photo image of the user. Referring to Figure 2, the application displays a screen and a camera image section 24 that obtains images from the front or rear camera of the mobile device 10. The application sets the front camera as the default camera. If the user wants to use the back camera, the user can depress the front and back camera switch button 26 to switch between the front and back cameras. The camera image section 24 may have crosshairs 28a, b which instruct the user to align the user's eyes along a horizontal crosshair 28a and the user's nose along a vertical crosshair 28b. When the user's face is properly aligned to the crosshairs 28a, b, the user may tap on the screen 30 to capture the image shown in the camera image section 24. Before capturing the image, the user may depress the fill light button 30 in order to adjust the lighting on the user's face. The fill light option 30 may be turned on only when using the back camera so that the camera's light can illuminate the user's face; this is useful when a friend of the user utilizes the mobile device 10 to capture the facial image of the user. If the user is capturing his or her facial image by way of a selfie, the user may depress the front and back camera switch button 26 to access the front camera. If the captured image is unsatisfactory, the user may depress the cancel button 32. Alternatively, the user may upload a facial image of the user by way of the photo gallery on the mobile device 10. The alignment check itself can be sketched as below.
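Once a face detector supplies eye and nose landmark coordinates, the crosshair check reduces to simple geometry. The function below is a hypothetical acceptance test, assuming the crosshairs 28a, b sit at the frame's horizontal and vertical midlines and that a tolerance of a few percent of the frame size is acceptable.

```python
def face_is_aligned(left_eye, right_eye, nose, frame_w, frame_h, tol=0.05):
    """Accept the shot when both eyes lie on the horizontal crosshair and
    the nose lies on the vertical crosshair, within a tolerance expressed
    as a fraction of the frame size."""
    cross_y = frame_h / 2                     # horizontal crosshair 28a
    cross_x = frame_w / 2                     # vertical crosshair 28b
    eyes_level = (abs(left_eye[1] - cross_y) < tol * frame_h
                  and abs(right_eye[1] - cross_y) < tol * frame_h)
    nose_centered = abs(nose[0] - cross_x) < tol * frame_w
    return eyes_level and nose_centered

# Landmark pixel coordinates as a face detector might report them.
print(face_is_aligned((280, 362), (360, 358), (322, 430), 640, 720))  # True
```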
It is also contemplated that the facial image may be uploaded from a desktop computer 20. The desktop computer 20 may also be used to capture the facial image of the user; in particular, the desktop computer 20 may have a camera which can capture the facial image of the user.
The facial images and the historical data entered by the user may be associated with a unique identifier stored in the user data repository on the server 12. This provides versatility and ease of use: the user can switch between mobile devices 10 and computers 20 while uploading images and entering historical data to complete the user's profile with all of the required and desired facial images and historical data about the user. For example, the facial image can be captured with the mobile device 10; the user can then log out and upload and associate historical data with the unique identifier from the desktop computer 20, and vice versa. In this regard, the user must log in to the system in order to create the unique identifier, under which all of the information, including but not limited to the facial images and the historical data of the user, is stored on the server 12.
In order to capture or upload photos from the photo gallery of the mobile device 10, the user may depress a photo gallery button 34 which accesses the photo gallery and allows the user to select a photo to be uploaded to the user data repository 14 on the server 12 through the app on the mobile device 10.
After tapping the screen 30 to capture the image, the user is asked to either cancel or confirm the facial image shown in the camera image section 24 by depressing the cancel button 36 or the confirm button 38, as shown in Figure 3. The user may also depress a support and help button 40 if the user is having difficulty inputting data, uploading images or utilizing the application.
Upon depressing the confirm button 38, the user is led to the screen shown in Figure 4. The user selects his or her gender, male or female, by depressing either the male button 42 or the female button 44. The user can also retake the photo by depressing the previous button 46, which leads the user back to the image capture screen shown in Figure 2. Upon depressing either the male or female button 42, 44, the user's facial image 48 is superimposed upon a body 50. The user can depress an about and information button 52 to find out more about the application, and an add story character button 54. The user may also depress a complete user profile button 56 and a volunteering function button 58. The user may also depress a play user's life story movie button 60 once the user has inputted a sufficient amount of historical data and taken the facial image discussed above.
Upon depressing the complete user profile button 56, one or more data categories 62a-n are displayed on the screen, as shown in Figure 6. Data categories 62a-e are shown: data category 62a is for age; data category 62b is for city; data category 62c is for education; data category 62d is for occupation; and data category 62e is for physical shape. Additional data categories may be shown by swiping the screen from right to left in the data categories section 64 of the screen of the mobile device 10, whereupon data categories 62f and following are shown: data category 62f is for eyewear; data category 62g is for hair; and data category 62h is for dress or clothing. Additional data categories may be incorporated into the computer implemented method and shown by depressing data category 62i.
Upon depressing data category 62a for age, a visual representation of various age stages of a person's life is shown immediately above the data categories section 64 in the category options section 66. In the category options section 66, toddler 68a, grade school 68b, teen 68c, adult 68d and senior 68e images are shown. The user may depress one of the images to enter historical data for that age of the user. By way of example and not limitation, the user may depress the childhood image 68b, at which time the user will be directed to the screen shown in Figure 7. In the category options section 66, the user can enter various information (i.e., historical data) relevant to that age. By way of example and not limitation, the user can enter the favorite game of the user when he or she was 3 to 12 years old. By swiping left or right in the category options section 66, other data can be entered, such as a profound memory, favorite toy, unforgettable activity or familiar scene.
Referring back to Figure 6, the user may depress the teen image 68c and be directed to the screen shown in Figure 8. In the category options section 66, the user may enter various information relevant to that age. By way of example and not limitation, the user may enter the user's favorite game, favorite toy, profound memory, unforgettable activity and familiar scene. These other items may be reached by swiping left and right on the screen in the category options section 66.
Referring back to Figure 6, the user may now depress the adult image 68d and be directed to the screen shown in Figure 9. In the category options section 66, the user may enter various information relevant to that age. By way of example and not limitation, the user may enter the user's favorite game, favorite toy, profound memory, unforgettable activity or familiar scene.
The user may enter information related to the user for the infant age by depressing the infant image 68a, or for the senior age by depressing the senior image 68e, which leads the user to the options shown in Figure 10.
For each age range, the user may enter historical data regarding city, education, occupation, memories (as discussed above), eyewear, hair, dress and shape, as sketched in the record below.
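One plausible way to hold this per-age-range historical data is a small record type, one instance per life stage. The field names below mirror the categories of Figures 6-14 but are otherwise hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class StageRecord:
    """Historical data for one age range; one instance per life stage."""
    city: Optional[str] = None
    education: Optional[str] = None
    occupation: Optional[str] = None
    memories: Dict[str, str] = field(default_factory=dict)  # game, toy, scene...
    eyewear: Optional[str] = None
    hair: Optional[str] = None
    dress: Optional[str] = None
    shape: Optional[str] = None

teen = StageRecord(city="Boston", education="Lincoln High School",
                   memories={"favorite_game": "basketball"})
print(teen.city, teen.memories)
```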
The user may also depress the city data category 62b. In the category options section 66, the user may click on the "please enter your city" link and enter the name of the city in which the user lives. The computer implemented method may request the user to enter one or more cities based on the user's age.
Referring now to Figure 12, the user may depress the data category 62c and be provided with options to enter the user's high school name and college or university name. Although not shown, the user may be presented with the option to enter the user's intermediate school name, grade school name and higher education names. This may be done by allowing the user to slide left and right in the category options section 66.
Referring now to Figure 13, the user may depress the data category 62d to specify his or her occupation. The occupation may be selected by visual representation, as shown in Figure 13 in the category options section 66, or may be entered as text by way of the on-screen keyboard.
Referring now to Figure 14, the user may depress data category 62e, upon which the category options section 66 illustrates a variety of body types for the gender of the user. The user may select the body type most representative of the user. The user may tap the done button 68, which saves the historical data of the user in the user data repository 14 on the server 12.
As discussed above, the user may access more data categories 62f-n by swiping right to left in the data categories section 64. Upon depressing these additional data category buttons 62f-n, the user is presented with the option to insert more historical data about the user for these other types of categories.
Referring now to Figure 15, a scene from the simulated user life story movie and/or slideshow is shown. In this regard, the user may include one or more logos within the scene. The scene may be displayed by depressing the button 62j.
Optionally, this feature may be a members-only option, wherein the user may be offered membership if the user provides his or her contact information (e.g., name, address, phone number, email address, other personal information and/or combinations thereof). As a further option, the member may be required to pay for the ability to place logos, trademarks, words, customized words and/or graphics into the scene. Additionally, companies, cities, places and people may pay for the option of having their trademark, logo or information appear in the option list presented to the user, so that the company-specific information is placeable into the scene. Upon depressing the ads icon 62j, one or more scenes from an animated slideshow or movie may be presented to the user, and the user may be given the option to include logo(s) or other information identified above in the slideshow or movie. The user can touch areas 82a, b, c, d-n on the screen to input the company-specific information. The user can type in a trademark. Alternatively, the user may be presented with options which retrieve information from a database of company-specific information that can be inserted into the areas 82a-n; these options may pop up as a list from which the user can select any one of the various possible entries. After the user has customized the scene, the user may depress a done button 80, and may be presented with additional screens to input additional company-specific information into the scene. Alternatively, the user may depress the area 82a, which will bring up a list of options that can be inserted into the scene; the user may select one of those options or type information into the area 82a. When the user selects one of the options or enters information into the area 82a, whether through the keyboard, the photo gallery or the option list, the selected information is propagated into the scene in areas 82b, c, d-n. When the user is finished inputting trademarks and logos into one or more scenes of the movie or slideshow, the user may depress the done button 80, at which point the user is directed to the screen shown in Figure 16. This propagation can be sketched as below.
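The propagation of a logo or trademark chosen for one area into the corresponding areas of the remaining scenes can be sketched as follows; the scene titles and area names are hypothetical placeholders modeled on the areas 82a-n.

```python
# Hypothetical scene model: each scene exposes named placeholder areas
# (after the areas 82a-n) that may carry company-specific information.
scenes = [
    {"title": "childhood street", "areas": {"82a": None, "82b": None}},
    {"title": "college campus",   "areas": {"82c": None, "82d": None}},
]

def propagate(scenes, content):
    """Copy the logo/trademark the user chose for one area into every
    placeholder area of every scene, as described above."""
    for scene in scenes:
        for area in scene["areas"]:
            scene["areas"][area] = content
    return scenes

propagate(scenes, "ACME logo")
print(scenes)
```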
The user may view the simulated user life story by depressing the done button 68 at any time during the process of entering the user data discussed above. If an insufficient amount of data has been entered, the done button 80 may be deactivated and shaded out to indicate the same to the user. Once sufficient user historical data has been entered into the application and saved to the user data repository 14, the done button 80 may be activated. Upon depressing the done button 68, the user is led to the screen shown in Figure 16.
The simulated user life story movie and/or slideshow is shown on the screen.
The user may play the movie or slideshow by depressing the play button 70. The movie or slideshow is simulated in that the actual photo of the user's face is incorporated into stock images and videos retrieved from the third-party stock image and video services 16 and compiled into a slideshow that depicts the chronological life of the user. Based on the information provided by the user, additional movies or slideshows can be generated and presented to the user in the movie options section 72. In Figure 16, three different movie options 72a-c are shown, but additional ones can also be presented to the user in the movie options section by allowing the user to swipe left and right. The movie clips may be downloaded or shared by depressing the download button 74 or the share button 76. The compilation step can be sketched as below.
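Compiling the chronologically ordered, face-merged scene frames into a shareable clip could be done with any video library; the sketch below assumes the moviepy 1.x API (ImageClip, set_duration, concatenate_videoclips), and the file names are illustrative only.

```python
# Assumes the moviepy 1.x API; frame file names are illustrative.
from moviepy.editor import ImageClip, concatenate_videoclips

def compile_story(scene_frames, seconds_per_scene=3, out="life_story.mp4"):
    """Concatenate the chronologically ordered, face-merged frames into
    a single clip that can then be played, downloaded or shared."""
    clips = [ImageClip(path).set_duration(seconds_per_scene)
             for path in scene_frames]
    concatenate_videoclips(clips, method="compose").write_videofile(out, fps=24)

compile_story(["toddler.png", "teen.png", "adult.png", "senior.png"])
```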
In generating the movie clip or slideshow of the user, the facial images of the user may be altered to match the user's age. By way of example and not limitation, the user may capture current facial images when he or she is middle aged. The facial image of the user at his or her current age is not used directly in the slideshow or movie; rather, the facial images of the user are transformed into a computer-animated face, and it is the computer-animated face that is used in the slideshow or movie. Moreover, the computer-animated face of the user may be altered or regenerated in order to make the user look younger or older to fit the particular age of the user depicted in a particular scene. For example, if the user is an adult, the computer-animated image may be altered to resemble the user as a child, and that childlike computer-animated image would be used for childhood memories in the slideshow or movie: the facial image is altered to a more youthful appearance so that the youthful-appearing facial image of the user is merged onto the background images for that particular timeframe. In each case, the facial image of the user is altered to the appropriate age of the user.
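Choosing which alteration to apply to the computer-animated face then reduces to comparing the user's current age with the age depicted in each scene, as in this hypothetical helper (pairing naturally with the smoothing/wrinkling sketch given after the summary above).

```python
def age_transform(current_age: int, scene_age: int) -> str:
    """Pick the facial alteration for a scene: smooth the animated face
    for scenes earlier than the user's current age, add wrinkles for
    later ones, and leave it unchanged otherwise."""
    if scene_age < current_age:
        return "smooth"      # e.g. an adult user depicted in childhood
    if scene_age > current_age:
        return "wrinkle"     # e.g. an adult user depicted as a senior
    return "none"

print(age_transform(45, 8))    # -> smooth
print(age_transform(45, 70))   # -> wrinkle
```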
Figure 17 shows a series of still images that are chronologically aggregated and assembled into the user's life story by way of simulation.
The video or movie may also be displayed on virtual reality eyewear 78 that allows the user to scan the scene left and right.
The above description is given by way of example, and not limitation. Given the above disclosure, one skilled in the art could devise variations that are within the scope and spirit of the invention disclosed herein. Further, the various features of the embodiments disclosed herein can be used alone, or in varying combinations with each other, and are not intended to be limited to the specific combination described herein. Thus, the scope of the claims is not to be limited by the illustrated embodiments.

Claims

WHAT IS CLAIMED IS:
1. A computer implemented method for aggregating one or more facial images of the user and historical data about the user's current and past life situation, and merging the images and the historical data to generate a simulated story about the user, the method comprising the steps of:
collecting historical user data with a software application;
collecting one or more facial images of the user;
animating the one or more facial images;
merging the animated facial image of the user onto an animated character in an animated scene based on the historical user data; and
generating a slideshow or movie clip from the merged animated facial image and animated scene.
2. The method of Claim 1 wherein the animated scene is based on stock images of places, occupations and sports.
3. The method of Claim 1 further comprising the step of altering the animated facial image of the user to account for the age of the user.
4. The method of Claim 3 wherein the altering step includes the step of digitally smoothing facial features of the user or adding wrinkles to an animated facial image of the user to make the user appear younger or older.
5. The method of Claim 1 wherein the animated scene includes premade animated scenery.
6. The method of Claim 1 further comprising the steps of:
presenting a preselected scene from the slideshow or movie clip; and
providing an option to include customized information into select areas of the scene on buildings, people and/or objects.
7. The method of Claim 6 wherein the option is a drop down list of trademarks, words, images or combinations thereof.
8. The method of Claim 6 wherein the customized information added into the preselected scene is transferred to other scenes in the slideshow or movie clip.
PCT/US2017/019444 2016-02-24 2017-02-24 Personal life story simulation system WO2017147484A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020187027477A KR20180132063A (en) 2016-02-24 2017-02-24 Personal life story simulation system
US16/079,889 US20190051032A1 (en) 2016-02-24 2017-02-24 Personal life story simulation system
JP2018545266A JP2019514095A (en) 2016-02-24 2017-02-24 Private life story simulation system
EP17757344.1A EP3420534A4 (en) 2016-02-24 2017-02-24 Personal life story simulation system
CN201780018702.4A CN109416840A (en) 2016-02-24 2017-02-24 Personal lifestyle story simulation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662299391P 2016-02-24 2016-02-24
US62/299,391 2016-02-24

Publications (1)

Publication Number Publication Date
WO2017147484A1 (en)

Family

ID=59685686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/019444 WO2017147484A1 (en) 2016-02-24 2017-02-24 Personal life story simulation system

Country Status (6)

Country Link
US (1) US20190051032A1 (en)
EP (1) EP3420534A4 (en)
JP (1) JP2019514095A (en)
KR (1) KR20180132063A (en)
CN (1) CN109416840A (en)
WO (1) WO2017147484A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11062387B2 (en) 2018-11-16 2021-07-13 Money Experience, Inc. Systems and methods for an intelligent interrogative learning platform

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10009536B2 (en) 2016-06-12 2018-06-26 Apple Inc. Applying a simulated optical effect based on data received from multiple camera sensors
DK180859B1 (en) 2017-06-04 2022-05-23 Apple Inc USER INTERFACE CAMERA EFFECTS
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US10375313B1 (en) 2018-05-07 2019-08-06 Apple Inc. Creative camera
DK201870623A1 (en) 2018-09-11 2020-04-15 Apple Inc. User interfaces for simulated depth effects
US10645294B1 (en) 2019-05-06 2020-05-05 Apple Inc. User interfaces for capturing and managing visual media
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
US11039074B1 (en) 2020-06-01 2021-06-15 Apple Inc. User interfaces for managing media
US11212449B1 (en) * 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
US11769198B1 (en) * 2020-10-09 2023-09-26 Wells Fargo Bank, N.A. Profile based video creation
US11140360B1 (en) 2020-11-10 2021-10-05 Know Systems Corp. System and method for an interactive digitally rendered avatar of a subject person
US11463657B1 (en) 2020-11-10 2022-10-04 Know Systems Corp. System and method for an interactive digitally rendered avatar of a subject person
US11582424B1 (en) 2020-11-10 2023-02-14 Know Systems Corp. System and method for an interactive digitally rendered avatar of a subject person
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030051255A1 (en) * 1993-10-15 2003-03-13 Bulman Richard L. Object customization and presentation system
US20070261071A1 (en) * 2006-04-20 2007-11-08 Wisdomark, Inc. Collaborative system and method for generating biographical accounts
US20080158230A1 (en) * 2006-12-29 2008-07-03 Pictureal Corp. Automatic facial animation using an image of a user
US20090028380A1 (en) * 2007-07-23 2009-01-29 Hillebrand Greg Method and apparatus for realistic simulation of wrinkle aging and de-aging

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0729271A3 (en) * 1995-02-24 1998-08-19 Eastman Kodak Company Animated image presentations with personalized digitized images
CN101584001B (en) * 2006-12-20 2012-06-13 伊斯曼柯达公司 Automated production of multiple output products
SG152952A1 (en) * 2007-12-05 2009-06-29 Gemini Info Pte Ltd Method for automatically producing video cartoon with superimposed faces from cartoon template
US8907984B2 (en) * 2009-07-08 2014-12-09 Apple Inc. Generating slideshows using facial detection information
US9466142B2 (en) * 2012-12-17 2016-10-11 Intel Corporation Facial movement based avatar animation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030051255A1 (en) * 1993-10-15 2003-03-13 Bulman Richard L. Object customization and presentation system
US20070261071A1 (en) * 2006-04-20 2007-11-08 Wisdomark, Inc. Collaborative system and method for generating biographical accounts
US20080158230A1 (en) * 2006-12-29 2008-07-03 Pictureal Corp. Automatic facial animation using an image of a user
US20090028380A1 (en) * 2007-07-23 2009-01-29 Hillebrand Greg Method and apparatus for realistic simulation of wrinkle aging and de-aging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3420534A4 *


Also Published As

Publication number Publication date
KR20180132063A (en) 2018-12-11
EP3420534A1 (en) 2019-01-02
JP2019514095A (en) 2019-05-30
US20190051032A1 (en) 2019-02-14
CN109416840A (en) 2019-03-01
EP3420534A4 (en) 2019-10-09

Similar Documents

Publication Publication Date Title
US20190051032A1 (en) Personal life story simulation system
US10474336B2 (en) Providing a user experience with virtual reality content and user-selected, real world objects
US20180330152A1 (en) Method for identifying, ordering, and presenting images according to expressions
JP2021534474A (en) Proposing content in an augmented reality environment
JP2019536131A (en) Controls and interfaces for user interaction in virtual space
CN114930399A (en) Image generation using surface-based neurosynthesis
CN107111889A (en) Use the method and system of the image of interactive wave filter
EP4260286A1 (en) Virtual clothing try-on
WO2021135197A1 (en) State recognition method and apparatus, electronic device, and storage medium
CN101836210A (en) Digital multimedia in the virtual world is shared
CN115735229A (en) Updating avatar garments in messaging systems
US20140223474A1 (en) Interactive media systems
EP4246963A1 (en) Providing shared augmented reality environments within video calls
US20220076492A1 (en) Augmented reality messenger system
WO2022005838A1 (en) Travel-based augmented reality content for images
US20160320833A1 (en) Location-based system for sharing augmented reality content
US20160035016A1 (en) Method for experiencing multi-dimensional content in a virtual reality environment
CN109074680A (en) Realtime graphic and signal processing method and system in augmented reality based on communication
Peterson Islamic fashion images on Instagram and the visuality of Muslim women
US20230345084A1 (en) System, method, and program for distributing video
US11409788B2 (en) Method for clustering at least two timestamped photographs
US20210075754A1 (en) Method for sharing a photograph
Grenader et al. The VideoMob interactive art installation connecting strangers through inclusive digital crowds
US20210072869A1 (en) Method for retrieving at least two captured photographs
KR101268640B1 (en) Display system and method for large screen

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018545266

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20187027477

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2017757344

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017757344

Country of ref document: EP

Effective date: 20180924

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17757344

Country of ref document: EP

Kind code of ref document: A1