US20230333720A1 - Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface - Google Patents
- Publication number
- US20230333720A1 (application Ser. No. 18/341,155)
- Authority
- US
- United States
- Prior art keywords
- estimated
- information
- exchange
- processors
- assessment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0278—Product appraisal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0283—Price estimation or determination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0633—Lists, e.g. purchase orders, compilation or processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- a display of a user device may display a user interface (e.g., a graphical user interface).
- a user interface may permit interactions between a user of the user device and the user device.
- the user may interact with the user interface to operate and/or control the user device to produce a desired result.
- the user may interact with the user interface of the user device to cause the user device to perform an action.
- the user interface may provide information to the user.
- the server device may include one or more memories and one or more processors communicatively coupled to the one or more memories.
- the server device may be configured to obtain image data that depicts a set of objects associated with a user.
- the server device may be configured to process, using at least one image analysis technique, the image data to determine identification information for each object of the set of objects.
- the server device may be configured to obtain exchange data related to at least one exchange log of the user.
- the server device may be configured to determine, based on the exchange data and the identification information, estimated exchange information for each object of a subset of objects of the set of objects.
- the server device may be configured to determine, based on the estimated exchange information, estimated assessment information for each object of the subset of objects.
- the server device may be configured to generate, based on the estimated assessment information, the presentation information for display via the GUI.
- Some implementations described herein relate to a non-transitory computer-readable medium that stores a set of instructions for a device.
- the set of instructions when executed by one or more processors of the device, may cause the device to obtain image data that depicts a set of objects associated with a user.
- the set of instructions when executed by one or more processors of the device, may cause the device to process the image data to determine identification information for each object of the set of objects.
- the set of instructions when executed by one or more processors of the device, may cause the device to obtain exchange data related to at least one exchange log of the user.
- the set of instructions when executed by one or more processors of the device, may cause the device to determine, based on the exchange data and the identification information, estimated assessment information for each object of a subset of objects of the set of objects.
- the set of instructions, when executed by one or more processors of the device, may cause the device to provide at least some of the estimated assessment information for display via a graphical user interface (GUI).
- the method may include obtaining, by a device, image data that depicts an object associated with a user.
- the method may include processing, by the device, the image data to determine identification information associated with the object.
- the method may include obtaining, by the device, exchange data related to at least one exchange log of the user.
- the method may include determining, by the device and based on the exchange data and the identification information associated with the object, estimated assessment information associated with the object.
- the method may include generating, by the device and based on the estimated assessment information, the presentation information for display via the GUI.
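The method steps above (obtain image data, identify objects, obtain exchange data, estimate assessments, generate presentation information) can be sketched end to end. This is a minimal illustrative sketch, not the patented implementation: all function names, dictionary fields, and values are assumptions, and the identification and estimation steps are reduced to stubs.

```python
# Hypothetical end-to-end sketch of the claimed method. All names and
# values are illustrative assumptions.

def identify_objects(image_data):
    # Stand-in for an image analysis technique (e.g., an object detector).
    return [{"object_id": obj} for obj in image_data["detected"]]

def estimate_assessment(identification, exchange_log):
    # Match each identified object to an exchange entry and use the
    # exchange amount as a naive estimated assessment amount.
    by_object = {e["object_id"]: e["amount"] for e in exchange_log}
    return [
        {"object_id": ident["object_id"], "amount": by_object[ident["object_id"]]}
        for ident in identification
        if ident["object_id"] in by_object
    ]

def generate_presentation(assessments):
    # Presentation information destined for the GUI: one row per object.
    return [f'{a["object_id"]}: ${a["amount"]:.2f}' for a in assessments]

image_data = {"detected": ["table", "television", "clock"]}
exchange_log = [
    {"object_id": "television", "amount": 499.99},
    {"object_id": "table", "amount": 150.00},
]
rows = generate_presentation(
    estimate_assessment(identify_objects(image_data), exchange_log))
print(rows)  # ['table: $150.00', 'television: $499.99']
```

Note that the clock is identified but produces no row: as the specification describes, objects without associated exchange data yield no estimated assessment.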
- FIGS. 1 A- 1 D are diagrams of an example implementation relating to generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- FIG. 2 is a diagram illustrating an example of training and using a machine learning model in connection with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- FIG. 3 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
- FIG. 4 is a diagram of example components of one or more devices of FIG. 3 .
- FIG. 5 is a flowchart of an example process relating to generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- a person may need to determine an estimated assessment amount (e.g., an estimated value) of one or more objects that the person owns. For example, when shopping for an insurance policy (e.g., a homeowner's insurance policy or a renter's insurance policy), the person may need to determine the estimated assessment information to determine an amount of insurance coverage that is needed to cover the one or more objects (e.g., as “personal property” or “valuable articles”). In an additional example, when submitting an insurance claim for damage to or destruction of the one or more objects, the person may need to determine the estimated assessment information to determine an amount of the insurance claim.
- the person may use a software program or a website application to itemize and estimate assessment amounts of the one or more objects.
- this requires an excessive use of computing resources (e.g., processing resources, memory resources, communication resources, and/or power resources, among other examples) of a user device, one or more server devices, or other devices, for the user to interact with a GUI (e.g., that is presented on the user device via the software program or the website application) for a user to input this information.
- computing resources are used for the person to navigate and interact with a large number of pages, fields, menus, and/or other elements to input information that identifies the one or more objects, that links the one or more objects to exchange information (e.g., receipts of purchase exchanges for the one or more objects), that estimates assessment values, and/or that provides other information associated with the one or more objects.
- This superfluous navigation and interaction with a GUI typically creates a poor user experience for the person as well.
- some implementations described herein provide a device (e.g., a user device and/or a server device) that obtains image data associated with one or more images.
- the one or more images may have been captured by at least one camera of the user device and may depict scenes associated with a user of the user device.
- the one or more images may depict “selfie” images of the user, images of friends of the user, and/or images of events or parties attended by the user, among other examples.
- the one or more images may depict objects, such as personal objects of the user that the user purchased.
- a selfie image of the user may include a depiction of furniture or a kitchen appliance that is owned by the user.
- the device may process the image data to determine identification information associated with a set of objects that are depicted in the image data.
- the device may process the selfie image described above to identify the furniture and/or the kitchen appliance (but not the user).
- the device may obtain exchange data related to at least one exchange log of the user (e.g., data that indicates one or more exchanges, such as purchase exchanges, of the user). Accordingly, the device, based on the identification information associated with the set of objects and the exchange data, may determine estimated assessment information for each object of the set of objects. The estimated assessment information for each object, of the set of objects, may indicate an estimated assessment amount (e.g., an estimated present value) of the object. The device then may generate presentation information (e.g., based on the estimated assessment information) for display via a GUI (e.g., on a display of the user device).
- the presentation information may include, for each object of the set of objects that are associated with the user and that are depicted in the image data, some or all of the identification information for the object and/or some or all of the estimated assessment information for the object.
- the presentation information may identify estimated assessment amounts for the furniture and the kitchen appliance, respectively.
- some implementations described herein automatically compile an inventory of objects and estimated assessment amounts as presentation information and provide the presentation information for display via a GUI. This reduces utilization of computing resources by devices that would otherwise be needed to compile the presentation information and/or to present an interface for entering the presentation information (e.g., as described above in relation to a software program or a website application). Further, a user experience is improved by automatically compiling and providing the presentation information for display via the GUI and thereby minimizing an amount of navigation and/or interaction by the user to input and/or generate the presentation information. Additionally, the presentation information can be used, for example, to identify an insurance policy (e.g., a home insurance policy) that adequately covers objects and/or to automatically prepare an insurance claim when the objects have been damaged or destroyed by an insured event.
- FIGS. 1 A- 1 D are diagrams of an example 100 associated with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- example 100 includes a user device, a server device, and a host server. These devices are described in more detail in connection with FIGS. 3 and 4 .
- the user device may capture one or more images.
- the user device may include at least one camera and a user of the user device may interact with the user device (e.g., via an image capture application of the user device) to cause the at least one camera to capture the one or more images.
- the user device may store the one or more images in a data structure that is configured to store images captured by the at least one camera of the user device, such as an electronic photo album or another type of image repository.
- the data structure may be included in the user device, or may be included in another device that is accessible to the user device (e.g., via a network), such as the server device.
- the user device may upload the one or more images to a device (e.g., another server device) associated with an online service, such as a social media service.
- the user of the user device may interact with the user device (e.g., via an online service application) to log in to an online service account and may cause the user device to upload the one or more images to the device associated with the online service account.
- each image, of the one or more images, may be associated with a scene (e.g., that is within a field of view (FOV) of the at least one camera of the user device) at a physical location.
- the image may depict, for example, one or more people, one or more objects, one or more animals, and/or one or more structures (e.g., buildings) in the scene at the physical location.
- an image may depict two people and multiple objects, such as a table, a television, and a clock. Over a period of time (e.g., a period of hours, days, weeks, months, or years), the user device may capture multiple images of different scenes.
- image data associated with the multiple images may depict multiple scenes, multiple people, multiple objects, multiple animals, and/or multiple structures at multiple different instants of time during the period of time.
- the image data may depict a set of objects (e.g., one or more objects of the multiple objects) that are associated with a particular person, such as the user of the user device.
- the image data may depict a set of objects that are owned by the user.
- the user device may include an object assessment application that is configured to facilitate identifying and assessing objects (e.g., identifying and determining a monetary value of objects) depicted in image data.
- the user of the user device may interact with the user device (e.g., via the object assessment application) to cause the user device to obtain image data.
- the user device may identify the data structure that is configured to store images captured by the at least one camera of the user device (e.g., the electronic photo album) and may obtain, from the data structure, image data that is associated with all of the images stored in the data structure.
- the user may interact with the user device (via the object assessment application) to select a set of images (e.g., one or more images) of all of the images stored in the data structure, which may cause the user device to obtain, from the data structure, image data that is associated with the set of images.
- the user device may send the image data (e.g., that was obtained by the user device) to the server device (e.g., to facilitate identifying and assessing objects depicted in the image data).
- the user device may send the image data to the server device by sending the image data to a host server of a network, which may send the image data to the server device.
- the server device may obtain the image data from an online service (e.g., a social media service).
- the user of the user device may interact with the user device (e.g., via the object assessment application) to provide, to the server device, information indicating an online service account of the user (e.g., information indicating a social media profile of the user).
- the server device may communicate with a device associated with the online service account (e.g., another server device) to obtain the image data (e.g., to download image data associated with images posted to the social media profile of the user).
- the server device may obtain image data that depicts a set of objects (e.g., one or more objects) that are associated with the user of the user device (e.g., that depicts a set of objects that are owned by the user of the user device).
- the server device may determine identification information for each object of the set of objects that are associated with the user and that are depicted in the image data.
- the identification information may indicate, for each object of the set of objects, an identifier of the object (e.g., what the object “is,” such as a table, a television, or a clock), a classification of the object (e.g., whether the object is “personal property,” a “valuable article,” or another type of property under an insurance policy), a product name associated with the object (e.g., a manufacturer and/or originator of the object), a product model associated with the object (e.g., a particular type of the object), at least one imaging location associated with the object (e.g., at least one physical location at which at least one image of the object was captured), or at least one time of imaging of the object (e.g., at least one time at which at least one image of the object was captured).
- the server device may process, using at least one image analysis technique (e.g., at least one object detection technique, such as a single shot detector (SSD) technique, a you-only-look-once (YOLO) technique, and/or a recurrent convolutional neural network (RCNN) technique, among other examples), the image data to determine the identification information for each object of the set of objects.
- the server device may use a machine learning model to determine the identification information for each object of the set of objects.
- the user device may process, using the machine learning model, the image data to determine the identification information for each object of the set of objects.
- the user device may train the machine learning model based on historical data (e.g., historical image data that depicts objects) and/or additional information, such as identification information for each of the objects depicted by the historical image data.
- the server device may train the machine learning model to determine identification information for an object.
- the machine learning model may be trained and/or used in a manner similar to that described below with respect to FIG. 2 .
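The identification step above can be illustrated by mapping raw detector output onto the identification-information fields the specification lists (identifier, classification, imaging location, imaging time). In this sketch, `run_detector` is a stub standing in for a trained SSD/YOLO/RCNN model, and the field names are assumptions rather than the patent's schema.

```python
# Illustrative sketch: build identification information from detector output.
# `run_detector` stands in for a trained object-detection model; the
# dictionary field names are assumed for illustration.

def run_detector(image):
    # A real implementation would run a trained object-detection model
    # over the image pixels; here we just return prebaked labels.
    return image["annotations"]

def build_identification_info(images):
    info = []
    for image in images:
        for label in run_detector(image):
            info.append({
                "identifier": label,                   # what the object "is"
                "classification": "personal property", # e.g., under a policy
                "imaging_location": image["location"],
                "imaging_time": image["timestamp"],
            })
    return info

images = [
    {"annotations": ["table", "clock"],
     "location": "home", "timestamp": "2021-11-01T10:00"},
]
identification = build_identification_info(images)
for record in identification:
    print(record["identifier"], record["imaging_location"])
```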
- the server device may process the image data to identify a subset of the image data. For example, the server device may process the image data to identify a subset of the image data that is associated with a particular location associated with the user, such as location of a home of the user. Accordingly, the server device may process the subset of the image data to determine the identification information for each object of the set of objects. In this way, the server device may determine identification information for objects associated with the particular location, such as objects associated with the home of the user.
- an exchange log may include one or more entries that indicate exchanges between the user and another party, such as a merchant, at one or more particular instants of time.
- an entry of an exchange log may indicate an exchange identifier (shown as exchange ID) of an exchange, a time of the exchange (shown as Date), an amount associated with the exchange (shown as Amount), and/or a party associated with the exchange (shown as Merchant).
- a first entry of the exchange log indicates that an exchange with an exchange identifier of “Exch 1” occurred on “Nov. 1, 2021” for “$19.99” between the user and “Merchant A.”
- An exchange may be associated with one or more objects.
- an exchange may be a purchase, by the user, of the one or more objects at the time of the exchange for the amount associated with the exchange from the party associated with the exchange.
- a second entry of the exchange log may indicate that the user purchased, in an exchange with an exchange identifier of “Exch 2,” one or more objects for $75.00 on “Nov.
- an exchange log may include exchange information, respectively, for one or more objects. That is, the exchange log may include an entry that indicates an object associated with an exchange, an exchange amount associated with the object, and/or a time of the exchange. For example, the entry may indicate that the user purchased a particular object (e.g., a television) for a particular amount on a particular date from a particular party.
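The exchange-log entry described above can be sketched as a small data structure. The first entry mirrors the example in the text (“Exch 1” on Nov. 1, 2021 for $19.99 with “Merchant A”); the field names themselves are assumptions for illustration.

```python
# Minimal sketch of an exchange-log entry (field names assumed).
from dataclasses import dataclass
from datetime import date

@dataclass
class ExchangeEntry:
    exchange_id: str  # Exchange ID (e.g., "Exch 1")
    when: date        # Date of the exchange
    amount: float     # Amount associated with the exchange
    merchant: str     # Party associated with the exchange

exchange_log = [
    ExchangeEntry("Exch 1", date(2021, 11, 1), 19.99, "Merchant A"),
]
print(exchange_log[0].merchant)  # Merchant A
```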
- the server device may determine estimated exchange information for each object of a subset of objects of the set of objects that are associated with the user and that are depicted in the image data (e.g., based on the exchange data and/or the identification information).
- the subset of objects may be objects that are associated with the exchange data.
- the subset of objects may be objects that were purchased in one or more exchanges included in the exchange data. Accordingly, the server device may not (or may be unable to) determine estimated exchange information for other objects of the set of objects that are not in the subset of objects (e.g., because the server device has not obtained exchange data associated with the other objects).
- the estimated exchange information may indicate for each object, of the subset of objects, an estimated exchange amount associated with the object and/or an estimated time of exchange associated with the object.
- the server device may determine, based on the identification information for the object, at least one parameter associated with the object, such as an identifier of the object, a classification of the object, a product name of the object, and/or a product model of the object, among other examples.
- the server device may identify, based on the at least one parameter and the exchange data, an exchange event that is associated with the object.
- the server device may determine, based on a product name and/or a product model of the object (e.g., a pair of shoes associated with a particular company brand), an exchange event associated with the object (e.g., a purchase exchange at a store associated with the particular company brand).
- the server device may determine, based on the exchange data, exchange event information associated with the exchange event.
- the server device may determine an exchange amount (e.g., a purchase amount) associated with the exchange and a time of the exchange. Accordingly, the server device may determine, based on the exchange event information, the estimated exchange information for the object.
- the server device may determine the estimated exchange amount (e.g., an estimated purchase amount) associated with the object based on the exchange amount (e.g., the purchase amount) associated with the exchange (e.g., by using one or more exchange amount estimation techniques).
- the server device may determine the estimated time of exchange associated with the object to be the time of the exchange indicated by the exchange event information.
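The matching step above — using a product name/model from the identification information to find an exchange event with an associated party — can be sketched as a lookup. The brand-to-merchant mapping, entry fields, and values here are illustrative assumptions; the specification leaves the actual matching technique open.

```python
# Hedged sketch of matching an object to an exchange event by brand.
# The mapping and field names are illustrative assumptions.

BRAND_TO_MERCHANT = {"AcmeShoes": "Acme Brand Store"}

def find_exchange_event(product_name, exchange_log):
    merchant = BRAND_TO_MERCHANT.get(product_name)
    for entry in exchange_log:
        if entry["merchant"] == merchant:
            # Estimated exchange info: amount and time taken from the event.
            return {"estimated_amount": entry["amount"],
                    "estimated_time": entry["date"]}
    return None  # no exchange data for this object

exchange_log = [
    {"merchant": "Acme Brand Store", "amount": 89.99, "date": "2021-11-05"},
]
event = find_exchange_event("AcmeShoes", exchange_log)
print(event)
```

An object whose brand has no matching exchange event yields `None`, consistent with the subset behavior described above (no estimated exchange information for objects outside the exchange data).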
- the server device may determine estimated assessment information for each object of the subset of objects that are associated with the user and that are depicted in the image data (e.g., based on the exchange data, the identification information, and/or the estimated exchange information).
- the estimated assessment information for each object, of the subset of objects, may indicate an estimated assessment amount (e.g., an estimated present value) of the object.
- the server device may use a machine learning model to determine the estimated assessment information for each object of the subset of objects. For example, the user device may process, using the machine learning model, the estimated exchange information for the object to determine the estimated assessment information for the object. As another example, the user device may process, using the machine learning model, the exchange data and/or the identification information to determine the estimated assessment information for the object. In some implementations, the user device may train the machine learning model based on historical data (e.g., historical estimated exchange information, historical exchange data, and/or historical identification information for a plurality of objects) and/or additional information, such as estimated assessment information for each of the plurality of objects.
- the server device may train the machine learning model to determine estimated assessment information for an object.
- the machine learning model may be trained and/or used in a manner similar to that described below with respect to FIG. 2 .
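As one way to make the training-and-estimation idea concrete, the following is a minimal sketch, assuming a simple depreciation-style model fit to historical observations. This is an illustrative stand-in, not the machine learning model of FIG. 2; the geometric-mean fitting rule and data layout are assumptions made for the example.

```python
# Illustrative sketch: fit a per-year value-retention factor r from
# historical data (exchange amount, object age in years, assessed amount),
# so that assessed ~= exchange_amount * r**age_years, then use r to
# estimate an assessment amount from estimated exchange information.

def fit_annual_retention(history):
    """history: list of (exchange_amount, age_years, assessed_amount)."""
    ratios = [(assessed / amount) ** (1.0 / age)
              for amount, age, assessed in history
              if age > 0 and amount > 0]
    # Geometric mean of the per-observation retention factors.
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

def estimate_assessment(exchange_amount, age_years, retention):
    """Estimated assessment amount given estimated exchange information."""
    return exchange_amount * retention ** age_years

# Hypothetical historical observations.
history = [(1000.0, 6, 275.0), (500.0, 2, 330.0)]
r = fit_annual_retention(history)
estimate = estimate_assessment(1000.0, 6, r)
```

A production system would instead train a richer supervised model on many such observations, as described with respect to FIG. 2.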
- the server device may generate presentation information (e.g., based on the estimated assessment information) for display via a GUI.
- the presentation information may include, for each object of the subset of objects that are associated with the user and that are depicted in the image data, some or all of the identification information for the object and/or some or all of the estimated assessment information for the object.
- the presentation information may include an identifier of the object (shown as Object ID) and the estimated assessment amount of the object (shown as Amount).
- the server device may provide the GUI to the user device.
- the server device may send the GUI to the host server of the network, which may send the GUI to the user device.
- the user device may display (e.g., when running the object assessment application) the presentation information on a display screen of the user device via the GUI.
- the user device and/or the server device may cause one or more actions to be performed. For example, one of the user device or the server device may determine, based on the estimated assessment information for each object of the subset of objects, total estimated assessment information for the subset of objects. The total estimated assessment information may indicate a sum of respective estimated assessment amounts of the subset of objects. Accordingly, one of the user device or the server device may generate, based on the total estimated assessment information for the subset of objects, a recommendation, such as an insurance product recommendation (e.g., to cover the sum of the respective estimated assessment amounts of the subset of objects), for display via the GUI.
- one of the user device or the server device may generate and submit, based on the total estimated assessment information for the subset of objects, a document, such as an insurance claim document associated with the subset of objects (e.g., to file an insurance claim for the sum of the respective estimated assessment amounts of the subset of objects).
- one of the user device or the server device may generate the insurance claim document and communicate with another device (e.g., another server device) associated with an insurance company to submit the insurance claim document.
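The total-assessment and recommendation steps above can be sketched briefly. This is a hedged illustration: the rounding-to-an-increment rule for the recommended coverage is an assumption made for the example, not a rule stated in the description.

```python
# Sketch: sum the per-object estimated assessment amounts, then
# generate a coverage recommendation based on the total.
import math

def total_assessment(per_object):
    """per_object: dict mapping object identifier -> estimated assessment amount."""
    return sum(per_object.values())

def recommend_coverage(total, increment=5000):
    """Illustrative rule: round the total up to the next coverage increment."""
    return math.ceil(total / increment) * increment

objects = {"obj-1": 275.00, "obj-2": 1450.00, "obj-3": 6100.00}
total = total_assessment(objects)     # sum of respective estimated amounts
coverage = recommend_coverage(total)  # recommended coverage amount
```

The same total could equally drive generation of a claim document, as described above.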
- FIGS. 1 A- 1 D are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIGS. 1 A- 1 D . Furthermore, two or more devices shown in FIGS. 1 A- 1 D may be implemented within a single device, or a single device shown in FIGS. 1 A- 1 D may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of one or more examples 100 may perform one or more functions described as being performed by another set of devices of one or more examples 100 . For example, the user device may perform one or more functions described as being performed by the server device, or vice versa.
- FIG. 2 is a diagram illustrating an example 200 of training and using a machine learning model in connection with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- the machine learning model training and usage described herein may be performed using a machine learning system.
- the machine learning system may include or may be included in a computing device, a server, a cloud computing environment, or the like, such as the server device and/or the user device described in more detail elsewhere herein.
- a machine learning model may be trained using a set of observations.
- the set of observations may be obtained from training data (e.g., historical data), such as data gathered during one or more processes described herein.
- the machine learning system may receive the set of observations (e.g., as input) from the server device and/or the user device, as described elsewhere herein.
- the set of observations includes a feature set.
- the feature set may include a set of variables, and a variable may be referred to as a feature.
- a specific observation may include a set of variable values (or feature values) corresponding to the set of variables.
- the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the server device and/or the user device. For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, and/or by receiving input from an operator.
- a feature set for a set of observations may include a first feature of Exchange Amount, a second feature of Exchange Time, a third feature of Object Info, and so on.
- for the first observation, the first feature may have a value of $1000, the second feature may have a value of Jun. 6, 2015, the third feature may have a value of LCD TV, and so on.
- the set of observations may be associated with a target variable.
- the target variable may represent a variable having a numeric value, may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, or labels), and/or may represent a variable having a Boolean value.
- a target variable may be associated with a target variable value, and a target variable value may be specific to an observation. In example 200 , the target variable is Assessment Amount, which has a value of $275.00 for the first observation.
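The first observation of example 200 can be written out as a feature set plus a target variable value. The dictionary layout below is an illustrative encoding choice; the feature names and values mirror the figure.

```python
# One observation from example 200: three features and the target variable.
observation = {
    "features": {
        "exchange_amount": 1000.00,     # first feature: Exchange Amount ($1000)
        "exchange_time": "2015-06-06",  # second feature: Exchange Time (Jun. 6, 2015)
        "object_info": "LCD TV",        # third feature: Object Info
    },
    # Target variable: Assessment Amount, specific to this observation.
    "target": {"assessment_amount": 275.00},
}
```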
- the target variable may represent a value that a machine learning model is being trained to predict
- the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable.
- the set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value.
- a machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model.
- the machine learning model may be trained on a set of observations that do not include a target variable. This may be referred to as an unsupervised learning model.
- the machine learning model may learn patterns from the set of observations without labeling or supervision, and may provide output that indicates such patterns, such as by using clustering and/or association to identify related groups of items within the set of observations.
- the machine learning system may apply the trained machine learning model 225 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 225 .
- the new observation may include a first feature of $ W, a second feature of X date, a third feature of Y info, and so on, as an example.
- the machine learning system may apply the trained machine learning model 225 to the new observation to generate an output (e.g., a result).
- the type of output may depend on the type of machine learning model and/or the type of machine learning task being performed.
- the output may include a predicted value of a target variable, such as when supervised learning is employed.
- the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations, such as when unsupervised learning is employed.
- the trained machine learning model 225 may predict a value of $ Z for the target variable of Assessment Amount for the new observation, as shown by reference number 235 . Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), among other examples.
- the first recommendation may include, for example, a recommendation to provide the Assessment Amount to the user device via a GUI.
- the first automated action may include, for example, generating presentation information that includes the Assessment Amount for display via a GUI.
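The prediction-to-action flow can be sketched as follows. Here `model` is a stand-in callable rather than trained machine learning model 225, and the 30%-of-exchange-amount rule it applies is purely an assumption for the example.

```python
# Sketch: apply a trained model to a new observation, then produce both
# a recommendation and presentation information for display via a GUI.

def act_on_prediction(model, new_observation):
    predicted = model(new_observation)  # predicted Assessment Amount
    # First recommendation: provide the Assessment Amount via the GUI.
    recommendation = f"Provide assessment amount ${predicted:.2f} via the GUI"
    # First automated action: generate presentation information.
    presentation = {"Object ID": new_observation.get("object_info"),
                    "Amount": predicted}
    return recommendation, presentation

# Stand-in model: estimate the assessment as 30% of the exchange amount.
model = lambda obs: 0.30 * obs["exchange_amount"]
rec, pres = act_on_prediction(model, {"exchange_amount": 1000.0,
                                      "object_info": "LCD TV"})
```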
- the trained machine learning model 225 may classify (e.g., cluster) the new observation in a cluster, as shown by reference number 240 .
- the observations within a cluster may have a threshold degree of similarity.
- the machine learning system may provide a first recommendation, such as the first recommendation described above.
- the machine learning system may perform a first automated action and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action) based on classifying the new observation in the first cluster, such as the first automated action described above.
- the recommendation and/or the automated action associated with the new observation may be based on a target variable value having a particular label (e.g., classification or categorization), may be based on whether a target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, falls within a range of threshold values, or the like), and/or may be based on a cluster in which the new observation is classified.
- the machine learning system may apply a rigorous and automated process to determining estimated assessment information for an object.
- the machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations. This increases accuracy and consistency, and reduces the delay associated with determining estimated assessment information for an object, relative to allocating computing resources for tens, hundreds, or thousands of operators to manually determine the estimated assessment information using the features or feature values.
- FIG. 2 is provided as an example. Other examples may differ from what is described in connection with FIG. 2 .
- FIG. 3 is a diagram of an example environment 300 in which systems and/or methods described herein may be implemented.
- environment 300 may include a user device 310 , a server device 320 , a host server 330 , and a network 340 .
- Devices of environment 300 may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections.
- the user device 310 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with generating presentation information for display via a GUI, as described elsewhere herein.
- the user device 310 may include a communication device and/or a computing device.
- the user device 310 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.
- the server device 320 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with generating presentation information for display via a GUI, as described elsewhere herein.
- the server device 320 may include a communication device and/or a computing device.
- the server device 320 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system.
- the server device 320 includes computing hardware used in a cloud computing environment.
- the host server 330 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with generating presentation information for display via a GUI, as described elsewhere herein.
- the host server 330 may include a communication device and/or a computing device, such as a server device.
- the host server 330 may include a server, such as an application server, a web server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system.
- the host server 330 includes computing hardware used in a cloud computing environment.
- the server device 320 is implemented on and integrated with the host server 330 (e.g., to grant or deny access to resources hosted or served by the host server 330 ).
- the network 340 includes one or more wired and/or wireless networks.
- the network 340 may include a cellular network, a public land mobile network, a local area network, a wide area network, a metropolitan area network, a telephone network, a private network, the Internet, and/or a combination of these or other types of networks.
- the network 340 enables communication among the devices of environment 300 .
- the number and arrangement of devices and networks shown in FIG. 3 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 3 . Furthermore, two or more devices shown in FIG. 3 may be implemented within a single device, or a single device shown in FIG. 3 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 300 may perform one or more functions described as being performed by another set of devices of environment 300 .
- FIG. 4 is a diagram of example components of a device 400 , which may correspond to the user device 310 , the server device 320 , and/or the host server 330 .
- the user device 310 , the server device 320 , and/or the host server 330 may include one or more devices 400 and/or one or more components of device 400 .
- device 400 may include a bus 410 , a processor 420 , a memory 430 , an input component 440 , an output component 450 , and a communication component 460 .
- Bus 410 includes one or more components that enable wired and/or wireless communication among the components of device 400 .
- Bus 410 may couple together two or more components of FIG. 4 , such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling.
- Processor 420 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component.
- Processor 420 is implemented in hardware, firmware, or a combination of hardware and software.
- processor 420 includes one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein.
- Memory 430 includes volatile and/or nonvolatile memory.
- memory 430 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory).
- Memory 430 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection).
- Memory 430 may be a non-transitory computer-readable medium.
- Memory 430 stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of device 400 .
- memory 430 includes one or more memories that are coupled to one or more processors (e.g., processor 420 ), such as via bus 410 .
- Input component 440 enables device 400 to receive input, such as user input and/or sensed input.
- input component 440 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator.
- Output component 450 enables device 400 to provide output, such as via a display, a speaker, and/or a light-emitting diode.
- Communication component 460 enables device 400 to communicate with other devices via a wired connection and/or a wireless connection.
- communication component 460 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
- Device 400 may perform one or more operations or processes described herein.
- a non-transitory computer-readable medium (e.g., memory 430) may store a set of instructions for execution by processor 420.
- Processor 420 may execute the set of instructions to perform one or more operations or processes described herein.
- execution of the set of instructions, by one or more processors 420, causes the one or more processors 420 and/or the device 400 to perform one or more operations or processes described herein.
- hardwired circuitry is used instead of or in combination with the instructions to perform one or more operations or processes described herein.
- processor 420 may be configured to perform one or more operations or processes described herein.
- implementations described herein are not limited to any specific combination of hardware circuitry and software.
- Device 400 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 4 . Additionally, or alternatively, a set of components (e.g., one or more components) of device 400 may perform one or more functions described as being performed by another set of components of device 400 .
- FIG. 5 is a flowchart of an example process 500 associated with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- one or more process blocks of FIG. 5 may be performed by a device (e.g., the user device 310 or the server device 320 ).
- one or more process blocks of FIG. 5 may be performed by another device or a group of devices separate from or including the device.
- one or more process blocks of FIG. 5 may be performed by one or more components of device 400 , such as processor 420 , memory 430 , input component 440 , output component 450 , and/or communication component 460 .
- process 500 may include obtaining image data that depicts a set of objects associated with a user (block 510 ). As further shown in FIG. 5 , process 500 may include processing, using at least one image analysis technique, the image data to determine identification information for each object of the set of objects (block 520 ). As further shown in FIG. 5 , process 500 may include obtaining exchange data related to at least one exchange log of the user (block 530 ). As further shown in FIG. 5 , process 500 may include determining estimated exchange information for each object of a subset of objects of the set of objects (block 540 ). As further shown in FIG. 5 , process 500 may include determining estimated assessment information for each object of the subset of objects (block 550 ). As further shown in FIG. 5 , process 500 may include generating, based on the estimated assessment information, the presentation information for display via the GUI (block 560 ).
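The blocks of process 500 can be sketched as a single pipeline. Each helper passed in is a placeholder for the corresponding block; the names and interfaces are illustrative assumptions, not the claimed implementation.

```python
# Hedged end-to-end sketch of process 500 (blocks 510-560).

def process_500(image_data, exchange_log, identify, match_exchange, assess):
    # Blocks 510-520: obtain image data and identify the set of objects.
    objects = identify(image_data)
    # Blocks 530-540: obtain exchange data and determine estimated
    # exchange information for each object; objects with no match drop
    # out, leaving the subset of objects.
    estimated = {o: match_exchange(o, exchange_log) for o in objects}
    subset = {o: e for o, e in estimated.items() if e is not None}
    # Block 550: determine estimated assessment information.
    assessments = {o: assess(e) for o, e in subset.items()}
    # Block 560: generate presentation information for display via the GUI.
    return [{"Object ID": o, "Amount": a} for o, a in assessments.items()]

# Stub helpers for illustration only.
identify = lambda img: ["tv", "chair"]          # stand-in image analysis
match_exchange = lambda o, log: log.get(o)      # stand-in exchange-log lookup
assess = lambda amount: 0.3 * amount            # stand-in assessment model
presentation = process_500("image-bytes", {"tv": 1000.0},
                           identify, match_exchange, assess)
```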
- process 500 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 5 . Additionally, or alternatively, two or more of the blocks of process 500 may be performed in parallel.
- the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
- satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
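The context-dependent meaning of "satisfying a threshold" can be expressed as a small comparator table; the mode names below are illustrative, not terms used in the claims.

```python
# Sketch: "satisfies a threshold" resolved per context via a comparator map.
import operator

COMPARATORS = {
    "greater": operator.gt,
    "greater_or_equal": operator.ge,
    "less": operator.lt,
    "less_or_equal": operator.le,
    "equal": operator.eq,
    "not_equal": operator.ne,
}

def satisfies(value, threshold, mode="greater"):
    return COMPARATORS[mode](value, threshold)
```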
- “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
- the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
Abstract
A server device may be configured to obtain image data that depicts a set of objects associated with a user. The server device may be configured to process, using at least one image analysis technique, the image data to determine identification information for each object of the set of objects. The server device may be configured to obtain exchange data related to at least one exchange log of the user and may be configured to determine, based on the exchange data and the identification information, estimated exchange information for each object of a subset of objects of the set of objects. The server device may be configured to determine, based on the estimated exchange information, estimated assessment information for each object of the subset of objects and may be configured to generate, based on the estimated assessment information, presentation information for display via a graphical user interface (GUI).
Description
- This application is a continuation of U.S. patent application Ser. No. 17/644,428, filed Dec. 15, 2021, which is incorporated herein by reference in its entirety.
- A display of a user device may display a user interface (e.g., a graphical user interface). A user interface may permit interactions between a user of the user device and the user device. In some cases, the user may interact with the user interface to operate and/or control the user device to produce a desired result. For example, the user may interact with the user interface of the user device to cause the user device to perform an action. Additionally, the user interface may provide information to the user.
- Some implementations described herein relate to a server device for generating presentation information for display via a graphical user interface (GUI). The server device may include one or more memories and one or more processors communicatively coupled to the one or more memories. The server device may be configured to obtain image data that depicts a set of objects associated with a user. The server device may be configured to process, using at least one image analysis technique, the image data to determine identification information for each object of the set of objects. The server device may be configured to obtain exchange data related to at least one exchange log of the user. The server device may be configured to determine, based on the exchange data and the identification information, estimated exchange information for each object of a subset of objects of the set of objects. The server device may be configured to determine, based on the estimated exchange information, estimated assessment information for each object of the subset of objects. The server device may be configured to generate, based on the estimated assessment information, the presentation information for display via the GUI.
- Some implementations described herein relate to a non-transitory computer-readable medium that stores a set of instructions for a device. The set of instructions, when executed by one or more processors of the device, may cause the device to obtain image data that depicts a set of objects associated with a user. The set of instructions, when executed by one or more processors of the device, may cause the device to process the image data to determine identification information for each object of the set of objects. The set of instructions, when executed by one or more processors of the device, may cause the device to obtain exchange data related to at least one exchange log of the user. The set of instructions, when executed by one or more processors of the device, may cause the device to determine, based on the exchange data and the identification information, estimated assessment information for each object of a subset of objects of the set of objects. The set of instructions, when executed by one or more processors of the device, may cause the device to provide at least some of the estimated assessment information for display via a graphical user interface (GUI).
- Some implementations described herein relate to a method of generating presentation information for display via a GUI. The method may include obtaining, by a device, image data that depicts an object associated with a user. The method may include processing, by the device, the image data to determine identification information associated with the object. The method may include obtaining, by the device, exchange data related to at least one exchange log of the user. The method may include determining, by the device and based on the exchange data and the identification information associated with the object, estimated assessment information associated with the object. The method may include generating, by the device and based on the estimated assessment information, the presentation information for display via the GUI.
- FIGS. 1A-1D are diagrams of an example implementation relating to generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- FIG. 2 is a diagram illustrating an example of training and using a machine learning model in connection with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- FIG. 3 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
- FIG. 4 is a diagram of example components of one or more devices of FIG. 3 .
- FIG. 5 is a flowchart of an example process relating to generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface.
- The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
- A person may need to determine an estimated assessment amount (e.g., an estimated value) of one or more objects that the person owns. For example, when shopping for an insurance policy (e.g., a homeowner's insurance policy or a renter's insurance policy), the person may need to determine the estimated assessment information to determine an amount of insurance coverage that is needed to cover the one or more objects (e.g., as “personal property” or “valuable articles”). In an additional example, when submitting an insurance claim for damage to or destruction of the one or more objects, the person may need to determine the estimated assessment information to determine an amount of the insurance claim.
- In some cases, the person may use a software program or a website application to itemize and estimate assessment amounts of the one or more objects. However, this requires an excessive use of computing resources (e.g., processing resources, memory resources, communication resources, and/or power resources, among other examples) of a user device, one or more server devices, or other devices, for the user to interact with a GUI (e.g., that is presented on the user device via the software program or the website application) to input this information. For example, computing resources are used for the person to navigate and interact with a large number of pages, fields, menus, and/or other elements to input information that identifies the one or more objects, that links the one or more objects to exchange information (e.g., receipts of purchase exchanges for the one or more objects), that estimates assessment values, and/or that provides other information associated with the one or more objects. This superfluous navigation and interaction with a GUI typically creates a poor user experience for the person as well.
- Some implementations described herein provide a device (e.g., a user device and/or server device) that obtains image data associated with one or more images. The one or more images may have been captured by at least one camera of the user device and may depict scenes associated with a user of the user device. For example, the one or more images may include "selfie" images of the user, images of friends of the user, and/or images of events or parties attended by the user, among other examples. Importantly, the one or more images may depict objects, such as personal objects of the user that the user purchased. For example, a selfie image of the user may include a depiction of furniture or a kitchen appliance that is owned by the user. In some implementations, the device may process the image data to determine identification information associated with a set of objects that are depicted in the image data. For example, the device may process the selfie image described above to identify the furniture and/or the kitchen appliance (but not the user).
- In some implementations, the device may obtain exchange data related to at least one exchange log of the user (e.g., data that indicates one or more exchanges, such as purchase exchanges, of the user). Accordingly, the device, based on the identification information associated with the set of objects and the exchange data, may determine estimated assessment information for each object of the set of objects. The estimated assessment information for each object, of the set of objects, may indicate an estimated assessment amount (e.g., an estimated present value) of the object. The device then may generate presentation information (e.g., based on the estimated assessment information) for display via a GUI (e.g., on a display of the user device). The presentation information may include, for each object of the set of objects that are associated with the user and that are depicted in the image data, some or all of the identification information for the object and/or some or all of the estimated assessment information for the object. In relation to the example described above, the presentation information may identify estimated assessment amounts for the furniture and the kitchen appliance, respectively.
- In this way, some implementations described herein automatically compile an inventory of objects and estimated assessment amounts as presentation information and provide the presentation information for display via a GUI. This reduces utilization of computing resources by devices that would otherwise be needed to compile the presentation information and/or to present an interface for entering the presentation information (e.g., as described above in relation to a software program or a website application). Further, a user experience is improved by automatically compiling and providing the presentation information for display via the GUI and thereby minimizing an amount of navigation and/or interaction by the user to input and/or generate the presentation information. Additionally, the presentation information can be used, for example, to identify an insurance policy (e.g., a home insurance policy) that adequately covers objects and/or to automatically prepare an insurance claim when the objects have been damaged or destroyed by an insured event.
-
FIGS. 1A-1D are diagrams of an example 100 associated with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface. As shown in FIGS. 1A-1D, example 100 includes a user device, a server device, and a host server. These devices are described in more detail in connection with FIGS. 3 and 4. - As shown in
FIG. 1A, and by reference number 105, the user device may capture one or more images. For example, the user device may include at least one camera and a user of the user device may interact with the user device (e.g., via an image capture application of the user device) to cause the at least one camera to capture the one or more images. Accordingly, the user device may store the one or more images in a data structure that is configured to store images captured by the at least one camera of the user device, such as an electronic photo album or another type of image repository. The data structure may be included in the user device, or may be included in another device that is accessible to the user device (e.g., via a network), such as the server device. Additionally, or alternatively, the user device may upload the one or more images to a device (e.g., another server device) associated with an online service, such as a social media service. For example, the user of the user device may interact with the user device (e.g., via an online service application) to log in to an online service account and may cause the user device to upload the one or more images to the device associated with the online service account. - In some implementations, each image, of the one or more images, may be associated with a scene (e.g., that is within a field of view (FOV) of the at least one camera of the user device) at a physical location. Accordingly, the image may depict, for example, one or more people, one or more objects, one or more animals, and/or one or more structures (e.g., buildings) in the scene at the physical location. In a specific example, as shown in
FIG. 1A, an image may depict two people and multiple objects, such as a table, a television, and a clock. Over a period of time (e.g., a period of hours, days, weeks, months, or years), the user device may capture multiple images of different scenes. Accordingly, image data associated with the multiple images may depict multiple scenes, multiple people, multiple objects, multiple animals, and/or multiple structures at multiple different instants of time during the period of time. In some implementations, the image data may depict a set of objects (e.g., one or more objects of the multiple objects) that are associated with a particular person, such as the user of the user device. For example, the image data may depict a set of objects that are owned by the user. - In some implementations, the user device may include an object assessment application that is configured to facilitate identifying and assessing objects (e.g., identifying and determining a monetary value of objects) depicted in image data. Accordingly, as shown in
FIG. 1B, and by reference number 110, the user of the user device may interact with the user device (e.g., via the object assessment application) to cause the user device to obtain image data. For example, the user device may identify the data structure that is configured to store images captured by the at least one camera of the user device (e.g., the electronic photo album) and may obtain, from the data structure, image data that is associated with all of the images stored in the data structure. Alternatively, the user may interact with the user device (via the object assessment application) to select a set of images (e.g., one or more images) of all of the images stored in the data structure, which may cause the user device to obtain, from the data structure, image data that is associated with the set of images. - As shown by
reference number 115, the user device may send the image data (e.g., that was obtained by the user device) to the server device (e.g., to facilitate identifying and assessing objects depicted in the image data). For example, the user device may send the image data to the server device by sending the image data to a host server of a network, which may send the image data to the server device. Alternatively, the server device may obtain the image data from an online service (e.g., a social media service). For example, the user of the user device may interact with the user device (e.g., via the object assessment application) to provide, to the server device, information indicating an online service account of the user (e.g., information indicating a social media profile of the user). Accordingly, the server device may communicate with a device associated with the online service account (e.g., another server device) to obtain the image data (e.g., to download image data associated with images posted to the social media profile of the user). In this way, the server device may obtain image data that depicts a set of objects (e.g., one or more objects) that are associated with the user of the user device (e.g., that depicts a set of objects that are owned by the user of the user device). - As shown by
reference number 120, the server device may determine identification information for each object of the set of objects that are associated with the user and that are depicted in the image data. The identification information may indicate for each object, of the set of objects, an identifier of the object (e.g., what the object "is," such as a table, a television, or a clock), a classification of the object (e.g., whether the object is "personal property," a "valuable article," or another type of property under an insurance policy), a product name associated with the object (e.g., a manufacturer and/or originator of the object), a product model associated with the object (e.g., a particular type of the object), at least one imaging location associated with the object (e.g., at least one physical location at which at least one image of the object was captured), or at least one time of imaging of the object (e.g., at least one time at which at least one image of the object was captured). - In some implementations, the server device may process the image data, using at least one image analysis technique (e.g., at least one object detection technique, such as a single shot detector (SSD) technique, a you-only-look-once (YOLO) technique, and/or a recurrent convolutional neural network (RCNN) technique, among other examples), to determine the identification information for each object of the set of objects. Additionally, or alternatively, the server device may use a machine learning model to determine the identification information for each object of the set of objects. For example, the user device may process, using the machine learning model, the image data to determine the identification information for each object of the set of objects.
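As a concrete illustration, the identification information described above can be represented as one structured record per detected object. The sketch below is a minimal Python example; the detection dictionaries (with "label", "confidence", and optional metadata keys) are an assumed format standing in for the output of an SSD or YOLO detector, not a format defined by this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentificationInfo:
    """Identification information for one detected object."""
    object_id: str                     # what the object "is" (e.g., "table")
    classification: str                # e.g., "personal property" or "valuable article"
    product_name: Optional[str]        # manufacturer/originator, if recognized
    product_model: Optional[str]       # particular type of the object, if recognized
    imaging_location: Optional[tuple]  # (lat, lon) where the image was captured
    imaging_time: Optional[str]        # time at which the image was captured

def build_identification_info(detections, min_confidence=0.5):
    """Convert raw detector output into identification records, skipping
    people and low-confidence detections (identify the objects, but not
    the user). The detection dict keys are an assumed, illustrative format.
    """
    records = []
    for det in detections:
        if det["label"] == "person" or det["confidence"] < min_confidence:
            continue
        records.append(IdentificationInfo(
            object_id=det["label"],
            classification=det.get("classification", "personal property"),
            product_name=det.get("product_name"),
            product_model=det.get("product_model"),
            imaging_location=det.get("location"),
            imaging_time=det.get("time"),
        ))
    return records
```
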
In some implementations, the user device may train the machine learning model based on historical data (e.g., historical image data that depicts objects) and/or additional information, such as identification information for each of the objects depicted by the historical image data. Using the historical data and/or the additional information as inputs to the machine learning model, the server device may train the machine learning model to determine identification information for an object. In some implementations, the machine learning model may be trained and/or used in a manner similar to that described below with respect to
FIG. 2. - In some implementations, the server device may process the image data to identify a subset of the image data. For example, the server device may process the image data to identify a subset of the image data that is associated with a particular location associated with the user, such as a location of a home of the user. Accordingly, the server device may process the subset of the image data to determine the identification information for each object of the set of objects. In this way, the server device may determine identification information for objects associated with the particular location, such as objects associated with the home of the user.
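Identifying a location-based subset of the image data can be sketched as a great-circle distance test against per-image metadata. The record format (a "location" key holding latitude/longitude from image metadata) and the 0.5 km radius are illustrative assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def images_near_location(images, home, radius_km=0.5):
    """Select the subset of image records captured near a particular location
    (e.g., the user's home). Records without location metadata are skipped.
    """
    subset = []
    for img in images:
        loc = img.get("location")
        if loc is not None and haversine_km(loc[0], loc[1], home[0], home[1]) <= radius_km:
            subset.append(img)
    return subset
```
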
- As shown in
FIG. 1C, the user may be associated with at least one exchange log that is stored in a data structure that is included in or accessible to the server device. An exchange log may include one or more entries that indicate exchanges between the user and another party, such as a merchant, at one or more particular instants of time. For example, as shown in FIG. 1C, an entry of an exchange log may indicate an exchange identifier (shown as exchange ID) of an exchange, a time of the exchange (shown as Date), an amount associated with the exchange (shown as Amount), and/or a party associated with the exchange (shown as Merchant). In a specific example, as further shown in FIG. 1C, a first entry of the exchange log indicates that an exchange with an exchange identifier of "Exch 1" occurred on "Nov. 1, 2021" for "$19.99" between the user and "Merchant A." An exchange may be associated with one or more objects. For example, an exchange may be a purchase, by the user, of the one or more objects at the time of the exchange for the amount associated with the exchange from the party associated with the exchange. In a specific example, as further shown in FIG. 1C, a second entry of the exchange log may indicate that the user purchased, in an exchange with an exchange identifier of "Exch 2," one or more objects for $75.00 on "Nov. 12, 2021" from "Merchant B." In some implementations, an exchange log may include exchange information, respectively, for one or more objects. That is, the exchange log may include an entry that indicates an object associated with an exchange, an exchange amount associated with the object, and/or a time of the exchange. For example, the entry may indicate that the user purchased a particular object (e.g., a television) for a particular amount on a particular date from a particular party. - As further shown in
FIG. 1C, and by reference number 125, the server device may obtain exchange data related to the at least one exchange log of the user. For example, the server device may communicate with the user device to obtain at least one authentication credential (e.g., a username and/or password, a security token, and/or at least one other type of credential) to access the at least one exchange log of the user. Accordingly, the server device may communicate (e.g., using the at least one authentication credential) with at least one data structure (e.g., a database, an electronic folder, and/or an electronic file that stores the at least one exchange log) to access the at least one exchange log and may thereby obtain the exchange data (e.g., by reading the at least one exchange log). The exchange data may include and/or indicate the information included in the at least one exchange log (e.g., the information indicated by the entries of the at least one exchange log). - As shown by
reference number 130, the server device may determine estimated exchange information for each object of a subset of objects of the set of objects that are associated with the user and that are depicted in the image data (e.g., based on the exchange data and/or the identification information). The subset of objects may be objects that are associated with the exchange data. For example, the subset of objects may be objects that were purchased in one or more exchanges included in the exchange data. Accordingly, the server device may not (or may be unable to) determine estimated exchange information for other objects of the set of objects that are not in the subset of objects (e.g., because the server device has not obtained exchange data associated with the other objects). - The estimated exchange information may indicate for each object, of the subset of objects, an estimated exchange amount associated with the object and/or an estimated time of exchange associated with the object. For example, the server device may determine, based on the identification information for the object, at least one parameter associated with the object, such as an identifier of the object, a classification of the object, a product name of the object, and/or a product model of the object, among other examples. The server device may identify, based on the at least one parameter and the exchange data, an exchange event that is associated with the object. For example, the server device may determine, based on a product name and/or a product model of the object (e.g., a pair of shoes associated with a particular company brand), an exchange event associated with the object (e.g., a purchase exchange at a store associated with the particular company brand). The server device may determine, based on the exchange data, exchange event information associated with the exchange event. 
For example, the server device may determine an exchange amount (e.g., a purchase amount) associated with the exchange and a time of the exchange. Accordingly, the server device may determine, based on the exchange event information, the estimated exchange information for the object. For example, the server device may determine the estimated exchange amount (e.g., an estimated purchase amount) associated with the object based on the exchange amount (e.g., the purchase amount) associated with the exchange (e.g., by using one or more exchange amount estimation techniques). As another example, the server device may determine the estimated time of exchange associated with the object to be the time of the exchange indicated by the exchange event information.
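The linking of an identified object to an exchange event can be sketched as follows. The keyword match on the merchant field and the field names are illustrative assumptions; a production implementation could use richer signals (item-level receipt data, timestamps, and so on).

```python
def match_exchange_event(obj, exchange_log):
    """Find the exchange-log entry most likely to be the purchase of `obj`.

    This toy matcher links an object to an exchange when the object's product
    name appears in the entry's merchant field (e.g., brand-name shoes bought
    at a store associated with that brand). The returned estimated exchange
    information uses the entry's amount and date as the estimated exchange
    amount and estimated time of exchange.
    """
    name = (obj.get("product_name") or "").lower()
    for entry in exchange_log:
        if name and name in entry["merchant"].lower():
            return {"object_id": obj["object_id"],
                    "estimated_exchange_amount": entry["amount"],
                    "estimated_time_of_exchange": entry["date"]}
    return None  # object not covered by the obtained exchange data
```
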
- As shown by
reference number 135, the server device may determine estimated assessment information for each object of the subset of objects that are associated with the user and that are depicted in the image data (e.g., based on the exchange data, the identification information, and/or the estimated exchange information). The estimated assessment information for each object, of the subset of objects, may indicate an estimated assessment amount (e.g., an estimated present value) of the object. - In some implementations, the server device may use a machine learning model to determine the estimated assessment information for each object of the subset of objects. For example, the user device may process, using the machine learning model, the estimated exchange information for the object to determine the estimated assessment information for the object. As another example, the user device may process, using the machine learning model, the exchange data and/or the identification information to determine the estimated assessment information for the object. In some implementations, the user device may train the machine learning model based on historical data (e.g., historical estimated exchange information, historical exchange data, and/or historical identification information for a plurality of objects) and/or additional information, such as estimated assessment information for each of the plurality of objects. Using the historical data and/or the additional information as inputs to the machine learning model, the server device may train the machine learning model to determine estimated assessment information for an object. In some implementations, the machine learning model may be trained and/or used in a manner similar to that described below with respect to
FIG. 2. - As shown in
FIG. 1D, and by reference number 140, the server device may generate presentation information (e.g., based on the estimated assessment information) for display via a GUI. The presentation information may include, for each object of the subset of objects that are associated with the user and that are depicted in the image data, some or all of the identification information for the object and/or some or all of the estimated assessment information for the object. For example, as shown in FIG. 1D, the presentation information may include an identifier of the object (shown as Object ID) and the estimated assessment amount of the object (shown as Amount). - As shown by
reference number 145, the server device may provide the GUI to the user device. For example, the server device may send the GUI to the host server of the network, which may send the GUI to the user device. As shown by reference number 150, the user device may display (e.g., when running the object assessment application) the presentation information on a display screen of the user device via the GUI. - As further shown in
FIG. 1D, and by reference number 155, the user device and/or the server device may cause one or more actions to be performed. For example, one of the user device or the server device may determine, based on the estimated assessment information for each object of the subset of objects, total estimated assessment information for the subset of objects. The total estimated assessment information may indicate a sum of respective estimated assessment amounts of the subset of objects. Accordingly, one of the user device or the server device may generate, based on the total estimated assessment information for the subset of objects, a recommendation, such as an insurance product recommendation (e.g., to cover the sum of the respective estimated assessment amounts of the subset of objects), for display via the GUI. Alternatively, one of the user device or the server device may generate and submit, based on the total estimated assessment information for the subset of objects, a document, such as an insurance claim document associated with the subset of objects (e.g., to file an insurance claim for the sum of the respective estimated assessment amounts of the subset of objects). For example, one of the user device or the server device may generate the insurance claim document and communicate with another device (e.g., another server device) associated with an insurance company to submit the insurance claim document. - As indicated above,
FIGS. 1A-1D are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIGS. 1A-1D. Furthermore, two or more devices shown in FIGS. 1A-1D may be implemented within a single device, or a single device shown in FIGS. 1A-1D may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of one or more examples 100 may perform one or more functions described as being performed by another set of devices of one or more examples 100. For example, the user device may perform one or more functions described as being performed by the server device, or vice versa. -
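The estimated assessment amounts (reference number 135) and the total-assessment actions (reference number 155) of example 100 can be sketched as below. The declining-balance depreciation rate, the value floor, and the $1,000 coverage increment are illustrative assumptions, not values taken from the disclosure.

```python
import math
from datetime import date

def estimated_assessment_amount(purchase_amount, purchase_date, as_of,
                                annual_depreciation=0.15, floor_fraction=0.10):
    """Estimate an object's present value from its estimated exchange amount
    and estimated time of exchange, using simple declining-balance
    depreciation (15%/year with a 10%-of-purchase floor, both assumptions).
    """
    years = max((as_of - purchase_date).days / 365.25, 0.0)
    value = purchase_amount * (1.0 - annual_depreciation) ** years
    return round(max(value, purchase_amount * floor_fraction), 2)

def coverage_recommendation(assessment_amounts, increment=1000):
    """Sum the respective estimated assessment amounts of the subset of
    objects and round up to the next coverage increment to produce an
    insurance product recommendation.
    """
    total = round(sum(assessment_amounts), 2)
    limit = math.ceil(total / increment) * increment if total > 0 else 0
    return {"total_estimated_assessment": total, "recommended_coverage": limit}
```
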
FIG. 2 is a diagram illustrating an example 200 of training and using a machine learning model in connection with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface. The machine learning model training and usage described herein may be performed using a machine learning system. The machine learning system may include or may be included in a computing device, a server, a cloud computing environment, or the like, such as the server device and/or the user device described in more detail elsewhere herein. - As shown by
reference number 205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from training data (e.g., historical data), such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the server device and/or the user device, as described elsewhere herein. - As shown by
reference number 210, the set of observations includes a feature set. The feature set may include a set of variables, and a variable may be referred to as a feature. A specific observation may include a set of variable values (or feature values) corresponding to the set of variables. In some implementations, the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the server device and/or the user device. For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, and/or by receiving input from an operator. - As an example, a feature set for a set of observations may include a first feature of Exchange Amount, a second feature of Exchange Time, a third feature of Object Info, and so on. As shown, for a first observation, the first feature may have a value of $1000, the second feature may have a value of Jun. 6, 2015, the third feature may have a value of LCD TV, and so on. These features and feature values are provided as examples, and may differ in other examples.
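The feature set above can be encoded numerically before training. In this sketch, Exchange Time becomes an age in days and Object Info becomes a one-hot vector over a small vocabulary; the vocabulary and both encoding choices are assumptions for illustration.

```python
from datetime import date

CATEGORIES = ["LCD TV", "Table", "Clock"]  # illustrative Object Info vocabulary

def encode_observation(exchange_amount, exchange_date, object_info,
                       as_of=date(2022, 1, 1)):
    """Turn one observation's feature set (Exchange Amount, Exchange Time,
    Object Info) into a numeric vector a machine learning model can consume.
    """
    age_days = (as_of - exchange_date).days      # Exchange Time -> age in days
    one_hot = [1.0 if object_info == c else 0.0 for c in CATEGORIES]
    return [float(exchange_amount), float(age_days)] + one_hot
```
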
- As shown by
reference number 215, the set of observations may be associated with a target variable. The target variable may represent a variable having a numeric value, may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, or labels), and/or may represent a variable having a Boolean value. A target variable may be associated with a target variable value, and a target variable value may be specific to an observation. In example 200, the target variable is Assessment Amount, which has a value of $275.00 for the first observation.
- The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model.
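The supervised setup can be sketched with a minimal nearest-neighbor regressor (k-nearest neighbor being one of the algorithm families named in connection with reference number 220). The toy observations and the choice of k=1 are illustrative.

```python
import math

class NearestNeighborRegressor:
    """Minimal k-nearest-neighbor regressor (k=1) over numeric feature
    vectors: predict the target variable value of the closest stored
    observation."""

    def fit(self, observations, targets):
        self.observations = [list(o) for o in observations]
        self.targets = list(targets)
        return self

    def predict(self, features):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        best = min(range(len(self.observations)),
                   key=lambda i: dist(self.observations[i], features))
        return self.targets[best]

# Toy training data: [exchange amount, age in years] -> Assessment Amount
X = [[1000.0, 6.5], [300.0, 1.0], [50.0, 10.0]]
y = [275.0, 250.0, 5.0]
model = NearestNeighborRegressor().fit(X, y)
```
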
- In some implementations, the machine learning model may be trained on a set of observations that do not include a target variable. This may be referred to as an unsupervised learning model. In this case, the machine learning model may learn patterns from the set of observations without labeling or supervision, and may provide output that indicates such patterns, such as by using clustering and/or association to identify related groups of items within the set of observations.
- As shown by
reference number 220, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm, a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 225 to be used to analyze new observations. - As shown by
reference number 230, the machine learning system may apply the trained machine learning model 225 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 225. As shown, the new observation may include a first feature of $W, a second feature of X date, a third feature of Y info, and so on, as an example. The machine learning system may apply the trained machine learning model 225 to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations, such as when unsupervised learning is employed. - As an example, the trained
machine learning model 225 may predict a value of $Z for the target variable of Assessment Amount for the new observation, as shown by reference number 235. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), among other examples. The first recommendation may include, for example, a recommendation to provide the Assessment Amount to the user device via a GUI. The first automated action may include, for example, generating presentation information that includes the Assessment Amount for display via a GUI. - In some implementations, the trained
machine learning model 225 may classify (e.g., cluster) the new observation in a cluster, as shown by reference number 240. The observations within a cluster may have a threshold degree of similarity. As an example, if the machine learning system classifies the new observation in a first cluster (e.g., a particular Assessment Amount categorization group), then the machine learning system may provide a first recommendation, such as the first recommendation described above. Additionally, or alternatively, the machine learning system may perform a first automated action and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action) based on classifying the new observation in the first cluster, such as the first automated action described above. - In some implementations, the recommendation and/or the automated action associated with the new observation may be based on a target variable value having a particular label (e.g., classification or categorization), may be based on whether a target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, falls within a range of threshold values, or the like), and/or may be based on a cluster in which the new observation is classified.
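Threshold-based selection between a recommendation and an automated action can be sketched as follows; the threshold values and action labels are illustrative assumptions, not values from the disclosure.

```python
def select_action(assessment_amount, recommend_threshold=100.0,
                  auto_action_threshold=1000.0):
    """Choose what to do with a predicted Assessment Amount based on whether
    it satisfies one or more thresholds.
    """
    if assessment_amount >= auto_action_threshold:
        # High predicted values: perform an automated action, e.g., generate
        # presentation information for display via the GUI.
        return "automated_action:generate_presentation_information"
    if assessment_amount >= recommend_threshold:
        # Mid-range predicted values: provide a recommendation instead.
        return "recommendation:provide_assessment_amount_via_gui"
    return "no_action"
```
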
- In this way, the machine learning system may apply a rigorous and automated process to determining estimated assessment information for an object. The machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining estimated assessment information for an object relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine estimated assessment information for an object using the features or feature values.
- As indicated above,
FIG. 2 is provided as an example. Other examples may differ from what is described in connection with FIG. 2. -
FIG. 3 is a diagram of an example environment 300 in which systems and/or methods described herein may be implemented. As shown in FIG. 3, environment 300 may include a user device 310, a server device 320, a host server 330, and a network 340. Devices of environment 300 may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. - The
user device 310 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with generating presentation information for display via a GUI, as described elsewhere herein. The user device 310 may include a communication device and/or a computing device. For example, the user device 310 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device. - The
server device 320 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with generating presentation information for display via a GUI, as described elsewhere herein. The server device 320 may include a communication device and/or a computing device. For example, the server device 320 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system. In some implementations, the server device 320 includes computing hardware used in a cloud computing environment. - The
host server 330 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with generating presentation information for display via a GUI, as described elsewhere herein. The host server 330 may include a communication device and/or a computing device, such as a server device. For example, the host server 330 may include a server, such as an application server, a web server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system. In some implementations, the host server 330 includes computing hardware used in a cloud computing environment. In some implementations, the server device 320 is implemented on and integrated with the host server 330 (e.g., to grant or deny access to resources hosted or served by the host server 330). - The
network 340 includes one or more wired and/or wireless networks. For example, the network 340 may include a cellular network, a public land mobile network, a local area network, a wide area network, a metropolitan area network, a telephone network, a private network, the Internet, and/or a combination of these or other types of networks. The network 340 enables communication among the devices of environment 300. - The number and arrangement of devices and networks shown in
FIG. 3 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 3. Furthermore, two or more devices shown in FIG. 3 may be implemented within a single device, or a single device shown in FIG. 3 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 300 may perform one or more functions described as being performed by another set of devices of environment 300. -
FIG. 4 is a diagram of example components of a device 400, which may correspond to the user device 310, the server device 320, and/or the host server 330. In some implementations, the user device 310, the server device 320, and/or the host server 330 may include one or more devices 400 and/or one or more components of device 400. As shown in FIG. 4, device 400 may include a bus 410, a processor 420, a memory 430, an input component 440, an output component 450, and a communication component 460. -
Bus 410 includes one or more components that enable wired and/or wireless communication among the components of device 400. Bus 410 may couple together two or more components of FIG. 4, such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling. Processor 420 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. Processor 420 is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, processor 420 includes one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein. -
Memory 430 includes volatile and/or nonvolatile memory. For example, memory 430 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). Memory 430 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). Memory 430 may be a non-transitory computer-readable medium. Memory 430 stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of device 400. In some implementations, memory 430 includes one or more memories that are coupled to one or more processors (e.g., processor 420), such as via bus 410. -
Input component 440 enables device 400 to receive input, such as user input and/or sensed input. For example, input component 440 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. Output component 450 enables device 400 to provide output, such as via a display, a speaker, and/or a light-emitting diode. Communication component 460 enables device 400 to communicate with other devices via a wired connection and/or a wireless connection. For example, communication component 460 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna. -
Device 400 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 430) may store a set of instructions (e.g., one or more instructions or code) for execution by processor 420. Processor 420 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 420, causes the one or more processors 420 and/or the device 400 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry is used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, processor 420 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. - The number and arrangement of components shown in
FIG. 4 are provided as an example. Device 400 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 4. Additionally, or alternatively, a set of components (e.g., one or more components) of device 400 may perform one or more functions described as being performed by another set of components of device 400. -
FIG. 5 is a flowchart of an example process 500 associated with generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface. In some implementations, one or more process blocks of FIG. 5 may be performed by a device (e.g., the user device 310 or the server device 320). In some implementations, one or more process blocks of FIG. 5 may be performed by another device or a group of devices separate from or including the device. Additionally, or alternatively, one or more process blocks of FIG. 5 may be performed by one or more components of device 400, such as processor 420, memory 430, input component 440, output component 450, and/or communication component 460. - As shown in
FIG. 5, process 500 may include obtaining image data that depicts a set of objects associated with a user (block 510). As further shown in FIG. 5, process 500 may include processing, using at least one image analysis technique, the image data to determine identification information for each object of the set of objects (block 520). As further shown in FIG. 5, process 500 may include obtaining exchange data related to at least one exchange log of the user (block 530). As further shown in FIG. 5, process 500 may include determining estimated exchange information for each object of a subset of objects of the set of objects (block 540). As further shown in FIG. 5, process 500 may include determining estimated assessment information for each object of the subset of objects (block 550). As further shown in FIG. 5, process 500 may include generating, based on the estimated assessment information, the presentation information for display via the GUI (block 560). - Although
FIG. 5 shows example blocks of process 500, in some implementations, process 500 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 5. Additionally, or alternatively, two or more of the blocks of process 500 may be performed in parallel. - The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations.
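Taken together, blocks 510 through 560 of process 500 amount to a pipeline from image data to presentation information. A minimal sketch follows; the stand-in identification step, the object names, the flat 0.8 assessment factor, and the data shapes are illustrative assumptions, not the claimed implementation.

```python
def identify_objects(image_data):
    """Block 520 stand-in: pretend each entry of image_data is one
    recognized object and return its identification information."""
    return [{"object_id": i, "name": name} for i, name in enumerate(image_data)]

def run_process(image_data, exchange_log):
    # Block 510: image data is assumed to be provided by the caller.
    # Block 520: determine identification information for each object.
    identified = identify_objects(image_data)
    # Block 530: exchange data related to the user's exchange log.
    exchange_data = {entry["name"]: entry["amount"] for entry in exchange_log}
    # Blocks 540-550: estimate exchange and assessment information for
    # the subset of objects that appear in the exchange data.
    results = []
    for obj in identified:
        if obj["name"] not in exchange_data:
            continue
        estimated_exchange = exchange_data[obj["name"]]
        # Illustrative assessment: a flat fraction of the exchange amount.
        estimated_assessment = round(estimated_exchange * 0.8, 2)
        results.append((obj["name"], estimated_assessment))
    # Block 560: presentation information for display via a GUI.
    return [f"{name}: estimated assessment {amount}" for name, amount in results]

print(run_process(["laptop", "lamp"], [{"name": "laptop", "amount": 1000.0}]))
```

In the sketch, "lamp" is dropped because it has no matching exchange data, mirroring the claims' restriction of estimation to a subset of the identified objects.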
- As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
- As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
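This context-dependent notion of satisfying a threshold can be captured in a small helper; the mode names below are illustrative labels for the comparisons the paragraph enumerates.

```python
import operator

# Each mode maps to a comparison; which one applies depends on context.
MODES = {
    "gt": operator.gt,  # greater than the threshold
    "ge": operator.ge,  # greater than or equal to the threshold
    "lt": operator.lt,  # less than the threshold
    "le": operator.le,  # less than or equal to the threshold
    "eq": operator.eq,  # equal to the threshold
    "ne": operator.ne,  # not equal to the threshold
}

def satisfies(value, threshold, mode="gt"):
    """Return True if value satisfies the threshold under the given mode."""
    return MODES[mode](value, threshold)

print(satisfies(5, 3))        # greater-than comparison
print(satisfies(3, 3, "ge"))  # greater-than-or-equal comparison
```

A range-of-thresholds check ("falls within a range of threshold values") would simply combine two such comparisons.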
- Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
- No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
Claims (20)
1. A device, comprising:
one or more memories; and
one or more processors, coupled to the one or more memories, configured to:
determine estimated exchange information for an object;
process the estimated exchange information to determine estimated assessment information for the object based on determining a threshold degree of similarity for the estimated assessment information to a particular assessment amount categorization group; and
generate presentation information for display via a graphical user interface based on the similarity of the estimated assessment information to the particular assessment amount categorization group.
2. The device of claim 1 , wherein the estimated exchange information for the object indicates an estimated time of exchange associated with the object.
3. The device of claim 1 , wherein the estimated exchange information for the object indicates an estimated exchange amount associated with the object.
4. The device of claim 1 , wherein the one or more processors are further configured to:
obtain image data for the object from a user device; and
wherein the one or more processors configured to determine the estimated exchange information for the object are configured to:
determine the estimated exchange information for the object based on the image data.
5. The device of claim 1 , wherein the one or more processors, when determining the estimated exchange information for the object, are configured to:
identify an exchange event that is associated with the object;
determine exchange event information associated with the exchange event; and
determine, based on the exchange event information, the estimated exchange information for the object.
6. The device of claim 1 , wherein the one or more processors are further configured to:
process, using an image analysis technique, image data that depicts a subject of the object to determine identification information for the object; and
wherein the one or more processors configured to determine the estimated exchange information for the object are configured to:
determine the estimated exchange information for the object based on the identification information.
7. The device of claim 1 , wherein the one or more processors are further configured to:
generate, based on the estimated assessment information, a recommendation for display via the graphical user interface.
8. A method, comprising:
determining, by a device, estimated exchange information for an object of a subset of objects;
determining, by the device and based on the estimated exchange information, estimated assessment information for the object based on a threshold degree of similarity of the estimated assessment information to a particular assessment amount categorization group; and
generating, by the device, presentation information for display via a graphical user interface based on the degree of similarity of the estimated assessment information to the particular assessment amount categorization group.
9. The method of claim 8 , wherein determining the estimated exchange information for the object comprises:
determining the estimated exchange information based on image data.
10. The method of claim 8 , wherein determining the estimated assessment information for the object comprises:
determining the estimated assessment information for the object based on a machine learning model.
11. The method of claim 8 , wherein the estimated exchange information indicates an estimated exchange amount associated with the object and an estimated time of exchange associated with the object.
12. The method of claim 8 , wherein the estimated exchange information is based on one or more of:
a product name of the object, or
a product model of the object.
13. The method of claim 8 , further comprising:
processing, using an image analysis technique, image data that depicts a subject of the object to determine identification information for the object; and
wherein determining the estimated exchange information for the object comprises:
determining the estimated exchange information for the object based on the identification information.
14. The method of claim 8 , further comprising:
generating, based on the estimated assessment information for the object, a recommendation for display via the graphical user interface.
15. A non-transitory computer-readable medium storing instructions, the instructions comprising:
one or more instructions that, when executed by one or more processors, cause the one or more processors to:
determine estimated exchange information for an object;
determine estimated assessment information for the object based on a threshold degree of similarity of the estimated assessment information to a particular cluster,
wherein the particular cluster includes the object and one or more related objects; and
generate presentation information for display based on the similarity of the estimated assessment information to the particular cluster.
16. The non-transitory computer-readable medium of claim 15 , wherein the estimated exchange information is based on one or more of:
a product name of the object, or
a product model of the object.
17. The non-transitory computer-readable medium of claim 15 , wherein the one or more instructions, when executed by the one or more processors, further cause the one or more processors to:
obtain exchange data related to an exchange log; and
determine, based on the exchange data, the estimated exchange information for the object.
18. The non-transitory computer-readable medium of claim 15 , wherein the one or more instructions, when executed by the one or more processors, further cause the one or more processors to:
generate, based on the estimated assessment information, a recommendation for display via a graphical user interface.
19. The non-transitory computer-readable medium of claim 15 , wherein the one or more instructions, when executed by the one or more processors, further cause the one or more processors to:
obtain image data for the object from a user device; and
wherein the one or more instructions, that cause the one or more processors to determine the estimated exchange information for the object, cause the one or more processors to:
determine the estimated exchange information for the object based on the image data.
20. The non-transitory computer-readable medium of claim 19 , wherein identification information associated with the image data for the object indicates at least one of:
an identifier of the object,
a classification of the object,
a product name associated with the object,
a product model associated with the object, or
at least one imaging location associated with the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/341,155 US20230333720A1 (en) | 2021-12-15 | 2023-06-26 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/644,428 US11714532B2 (en) | 2021-12-15 | 2021-12-15 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
US18/341,155 US20230333720A1 (en) | 2021-12-15 | 2023-06-26 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/644,428 Continuation US11714532B2 (en) | 2021-12-15 | 2021-12-15 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230333720A1 (en) | 2023-10-19
Family
ID=86695582
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/644,428 Active US11714532B2 (en) | 2021-12-15 | 2021-12-15 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
US18/341,155 Pending US20230333720A1 (en) | 2021-12-15 | 2023-06-26 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/644,428 Active US11714532B2 (en) | 2021-12-15 | 2021-12-15 | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Country Status (1)
Country | Link |
---|---|
US (2) | US11714532B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11714532B2 (en) * | 2021-12-15 | 2023-08-01 | Capital One Services, Llc | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8219558B1 (en) * | 2008-04-25 | 2012-07-10 | David Scott Trandal | Methods and systems for inventory management |
US20150032480A1 (en) * | 2013-07-26 | 2015-01-29 | Bank Of America Corporation | Use of e-receipts to determine insurance valuation |
US9836793B2 (en) * | 2009-12-31 | 2017-12-05 | Hartford Fire Insurance Company | Insurance processing system and method using mobile devices for proof of ownership |
US10210577B1 (en) * | 2015-04-17 | 2019-02-19 | State Farm Mutual Automobile Insurance Company | Electronic device data capture for property insurance quotes |
US20190188796A1 (en) * | 2017-12-14 | 2019-06-20 | Mastercard International Incorporated | Personal property inventory captivator systems and methods |
US10521865B1 (en) * | 2015-12-11 | 2019-12-31 | State Farm Mutual Automobile Insurance Company | Structural characteristic extraction and insurance quote generation using 3D images |
US10672080B1 (en) * | 2016-02-12 | 2020-06-02 | State Farm Mutual Automobile Insurance Company | Systems and methods for enhanced personal property replacement |
US10970549B1 (en) * | 2017-11-14 | 2021-04-06 | Wells Fargo Bank, N.A. | Virtual assistant of safe locker |
US20210192620A1 (en) * | 2019-12-18 | 2021-06-24 | EdenLedger, Inc. d/b/a FanVestor | Machine learning-based digital exchange platform |
US11055531B1 (en) * | 2018-08-24 | 2021-07-06 | United Services Automobiie Association (USAA) | Augmented reality method for repairing damage or replacing physical objects |
US20210279957A1 (en) * | 2020-03-06 | 2021-09-09 | Yembo, Inc. | Systems and methods for building a virtual representation of a location |
US11182860B2 (en) * | 2018-10-05 | 2021-11-23 | The Toronto-Dominion Bank | System and method for providing photo-based estimation |
US20220101245A1 (en) * | 2020-09-29 | 2022-03-31 | International Business Machines Corporation | Automated computerized identification of assets |
US20220148051A1 (en) * | 2019-02-08 | 2022-05-12 | Independent Flooring Validation Ltd | Machine learning based method of recognising flooring type and providing a cost estimate for flooring replacement |
US20220222752A1 (en) * | 2017-10-16 | 2022-07-14 | Mitchell International, Inc. | Methods for analyzing insurance data and devices thereof |
US11468515B1 (en) * | 2020-02-18 | 2022-10-11 | BlueOwl, LLC | Systems and methods for generating and updating a value of personal possessions of a user for insurance purposes |
US11599941B2 (en) * | 2018-05-06 | 2023-03-07 | Strong Force TX Portfolio 2018, LLC | System and method of a smart contract that automatically restructures debt loan |
US11714532B2 (en) * | 2021-12-15 | 2023-08-01 | Capital One Services, Llc | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8219558B1 (en) * | 2008-04-25 | 2012-07-10 | David Scott Trandal | Methods and systems for inventory management |
US9836793B2 (en) * | 2009-12-31 | 2017-12-05 | Hartford Fire Insurance Company | Insurance processing system and method using mobile devices for proof of ownership |
US20150032480A1 (en) * | 2013-07-26 | 2015-01-29 | Bank Of America Corporation | Use of e-receipts to determine insurance valuation |
US10210577B1 (en) * | 2015-04-17 | 2019-02-19 | State Farm Mutual Automobile Insurance Company | Electronic device data capture for property insurance quotes |
US10521865B1 (en) * | 2015-12-11 | 2019-12-31 | State Farm Mutual Automobile Insurance Company | Structural characteristic extraction and insurance quote generation using 3D images |
US10672080B1 (en) * | 2016-02-12 | 2020-06-02 | State Farm Mutual Automobile Insurance Company | Systems and methods for enhanced personal property replacement |
US20220222752A1 (en) * | 2017-10-16 | 2022-07-14 | Mitchell International, Inc. | Methods for analyzing insurance data and devices thereof |
US10970549B1 (en) * | 2017-11-14 | 2021-04-06 | Wells Fargo Bank, N.A. | Virtual assistant of safe locker |
US20190188796A1 (en) * | 2017-12-14 | 2019-06-20 | Mastercard International Incorporated | Personal property inventory captivator systems and methods |
US11599941B2 (en) * | 2018-05-06 | 2023-03-07 | Strong Force TX Portfolio 2018, LLC | System and method of a smart contract that automatically restructures debt loan |
US11055531B1 (en) * | 2018-08-24 | 2021-07-06 | United Services Automobiie Association (USAA) | Augmented reality method for repairing damage or replacing physical objects |
US11182860B2 (en) * | 2018-10-05 | 2021-11-23 | The Toronto-Dominion Bank | System and method for providing photo-based estimation |
US20220148051A1 (en) * | 2019-02-08 | 2022-05-12 | Independent Flooring Validation Ltd | Machine learning based method of recognising flooring type and providing a cost estimate for flooring replacement |
US20210192620A1 (en) * | 2019-12-18 | 2021-06-24 | EdenLedger, Inc. d/b/a FanVestor | Machine learning-based digital exchange platform |
US11468515B1 (en) * | 2020-02-18 | 2022-10-11 | BlueOwl, LLC | Systems and methods for generating and updating a value of personal possessions of a user for insurance purposes |
US20210279852A1 (en) * | 2020-03-06 | 2021-09-09 | Yembo, Inc. | Identifying flood damage to an indoor environment using a virtual representation |
US20210279957A1 (en) * | 2020-03-06 | 2021-09-09 | Yembo, Inc. | Systems and methods for building a virtual representation of a location |
US20220101245A1 (en) * | 2020-09-29 | 2022-03-31 | International Business Machines Corporation | Automated computerized identification of assets |
US11714532B2 (en) * | 2021-12-15 | 2023-08-01 | Capital One Services, Llc | Generating presentation information associated with one or more objects depicted in image data for display via a graphical user interface |
Also Published As
Publication number | Publication date |
---|---|
US11714532B2 (en) | 2023-08-01 |
US20230185436A1 (en) | 2023-06-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CAPITAL ONE SERVICES, LLC, VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TIKOIAN, KATHRYN;MAIMAN, TYLER;ATKINS, PHOEBE;REEL/FRAME:064060/0970 Effective date: 20211214 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |