US20180349687A1 - Workflow creation by image analysis - Google Patents

Workflow creation by image analysis

Info

Publication number
US20180349687A1
US20180349687A1 (application US15/612,005)
Authority
US
United States
Prior art keywords
images
workflow
contextual
instructions
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/612,005
Inventor
James E. Bostick
Danny Y. Chen
Sarbajit K. Rakshit
Keith R. Walker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kyndryl Inc
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US15/612,005
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignors: WALKER, KEITH R.; RAKSHIT, SARBAJIT K.; BOSTICK, JAMES E.; CHEN, DANNY Y.
Publication of US20180349687A1
Priority to US16/402,372 (granted as US10699105B2)
Assigned to KYNDRYL, INC. Assignor: INTERNATIONAL BUSINESS MACHINES CORPORATION
Legal status: Abandoned

Classifications

    • G06K 9/00335
    • G06V 40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
    • G06V 10/763: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; clustering, e.g. of similar faces in social networks; non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06K 2209/27
    • G06V 2201/10: Indexing scheme relating to image or video recognition or understanding; recognition assisted with metadata


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A computer-implemented method for generating contextual workflows includes receiving a plurality of images; analyzing the received plurality of images to identify one or more activities corresponding to the plurality of images and one or more contextual details corresponding to the plurality of images, wherein the contextual details indicate a capture location, a capture time, a capture sequence, or a capture subject for the plurality of images; calculating a confidence weighting for the one or more identified activities for the plurality of images; creating a contextual workflow according to the calculated confidence weightings; receiving a query for a workflow corresponding to an indicated context; and identifying one or more workflows that match the indicated context.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates generally to the field of image analysis, and more specifically to analyzing images to create workflows.
  • A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systemic organization of resources into processes that transform materials, provide services, or process information. In a broader sense, a workflow is a sequence of activities to be performed to complete a task. Each activity has a defined duration, actor, guideline, and other such details. The activities may also have parameters that indicate locations, timeframes, and other contextual parameters that impose limitations on the occurrence of said activities. Workflows can provide instructions for task completion to an individual who is unfamiliar with either the task itself or a context in which the task must be completed.
  • SUMMARY
  • As disclosed herein, a computer-implemented method for generating contextual workflows includes receiving a plurality of images; analyzing the received plurality of images to identify one or more activities corresponding to each image and one or more contextual details corresponding to each image, wherein the contextual details indicate a capture location, a capture time, a capture sequence, or a capture subject for each of the images; calculating a confidence weighting for the one or more identified activities for the plurality of images; creating a contextual workflow according to the calculated confidence weightings; receiving a query for a workflow corresponding to an indicated context; and identifying one or more workflows that match the indicated context. A computer program product and a computer system corresponding to the method are also disclosed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a functional block diagram depicting one example of a contextual workflow creation system in accordance with one embodiment of the present invention;
  • FIG. 2 is a flowchart depicting a workflow creation method in accordance with at least one embodiment of the present invention;
  • FIG. 3A depicts an example set of images in accordance with one embodiment of the present invention;
  • FIG. 3B depicts an example table indicating the contextual details to which each cluster of images from FIG. 3A corresponds;
  • FIG. 4 depicts an example created workflow in accordance with one embodiment of the present invention; and
  • FIG. 5 depicts a block diagram of components of a computer, in accordance with some embodiments of the present invention.
  • DETAILED DESCRIPTION
  • When engaging in unfamiliar activities, a person may not know the correct sequence of actions for completing said activities, if such a sequence exists. For example, a person who has traveled to an unfamiliar location may be unaware of a customary sequence of events taken with respect to a particular activity. While in some cases prior research can provide some information, such research may not take into consideration existing contextual details, such as a time of day, time of year, nearby objects, or events. Furthermore, prior research may be infeasible with limited time, especially if one is put into a situation unexpectedly or without prior planning.
  • For example, if visiting Boston and trying to take the subway, a person may not know that at certain times of day at some platforms, they must stand in specific spots or else they may miss the train or be forced to rush to board. The present invention analyzes images that correspond to one or more contextual details related to a device's current conditions to provide one or more workflows indicating suggested actions based on the activities displayed in the images. In the above example, the present invention analyzes images of the subway platform at which the person is standing, taken at the current time of day, and identifies that people standing in certain locations remain in those locations before and after the arrival of the train. In other words, these people were unable to board, perhaps because it is a particularly busy time for this platform and they were standing too far from the doors. A workflow is therefore created indicating that the user needs to stand in a certain location to board the train (e.g., next to a pillar, away from a stairwell).
  • The present invention will now be described in detail with reference to the Figures. Implementation of embodiments of the invention may take a variety of forms, and exemplary implementation details are discussed subsequently with reference to the Figures.
  • FIG. 1 is a functional block diagram depicting one example of a contextual workflow creation system 100 in accordance with one embodiment of the present invention. As depicted, contextual workflow creation system 100 includes computing systems 110 and network 130. Contextual workflow creation system 100 enables workflows to be created and identified based on a set of one or more contextual details.
  • Computing systems 110 can be desktop computers, laptop computers, specialized computer servers, or any other computer systems known in the art. In some embodiments, computing systems 110 represent computer systems utilizing clustered computers and components to act as a single pool of seamless resources. In general, computing systems 110 are representative of any electronic devices, or combinations of electronic devices, capable of executing machine-readable program instructions, as described in greater detail with regard to FIG. 5.
  • As depicted, computing system 110B includes a context detection application 116. Context detection application 116 may be configured to detect one or more current contextual conditions corresponding to computing system 110B. Example contextual conditions may include a current time, location, or weather conditions detected by computing system 110B, or a current event indicated by a calendar service or social networking platform available via computing system 110B. In some embodiments, context detection application 116 is further configured to capture, store, and provide images to computing system 110A via network 130.
  • As depicted, computing system 110A includes a workflow creation application 112. Workflow creation application 112 may be configured to receive images as well as current contextual conditions corresponding to computing system 110B. In some embodiments, workflow creation application 112 is configured to execute a workflow creation method. One example of a suitable workflow creation method is described in further detail with respect to FIG. 2.
  • Network 130 can be, for example, a local area network (LAN), a wide area network (WAN) such as the Internet, or a combination of the two, and can include wired, wireless, or fiber optic connections. In general, network 130 can be any combination of connections and protocols that will support communications between computing systems 110A and 110B in accordance with an embodiment of the present invention. In at least one embodiment of the present invention, network 130 transmits contextual details and identified workflows between computing systems 110A and 110B.
  • FIG. 2 is a flowchart depicting a workflow creation method 200 in accordance with at least one embodiment of the present invention. As depicted, workflow creation method 200 includes identifying (210) one or more stored images, detecting (220) one or more contextual details corresponding to the one or more images, calculating (230) a confidence weighting for the one or more identified activities, creating (240) a context workflow according to the confidence weightings, receiving (250) a query for a workflow corresponding to an indicated context, identifying (260) one or more workflows that match the indicated context, and transmitting (270) the at least one matched context workflow. Workflow creation method 200 enables the creation and identification of appropriate workflows according to one or more contextual details.
  • Identifying (210) one or more images of interest may include receiving one or more images from a user device. In some embodiments, the user device provides one or more stored images from its own local storage. In other embodiments, the user device provides the one or more stored images by providing location details for the one or more stored images corresponding to an image hosting platform, photo sharing service, social network platform, or other platform on which photos are stored or shared. The one or more stored images may correspond to photos or video content from which still images can be extracted. Identifying (210) one or more stored images may further include receiving a set of images as well as an indicator indicating one or more stored images of interest. The indicator may indicate specific images to be analyzed, or may indicate image formats or other image details used to identify the one or more stored images of interest. For example, the indicator may indicate that only JPEG files at a particular storage location are to be analyzed.
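  • For illustration, step 210 might be realized along the following lines. This is a minimal Python sketch; the function name find_images_of_interest and the directory path are hypothetical, not taken from the patent:

```python
from pathlib import Path

def find_images_of_interest(storage_location, formats=(".jpg", ".jpeg")):
    """Collect stored images of interest, honoring an indicator that
    restricts analysis to certain file formats at a given location."""
    root = Path(storage_location)
    if not root.is_dir():
        return []
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in formats)

# Example: analyze only JPEG files at a particular (hypothetical) storage location.
images = find_images_of_interest("/photos/trip")
```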
  • Detecting (220) one or more contextual details corresponding to the one or more images may include analyzing each of the one or more images to identify one or more contextual details corresponding to each image. In some embodiments, each image of the one or more images of interest may be analyzed to identify any of a time context, a location context, an object context, or an event context. A time context corresponds to the date and time at which the image was created.
  • An object context corresponds to one or more objects that are detected within an image. Existing object recognition techniques may be used to identify any objects that appear in an image. Utilized object recognition techniques may include, but are not limited to, appearance-based methods (such as edge matching or greyscale matching), feature based methods (such as interpretation trees or pose clustering), gradient histograms, and template matching. In at least one embodiment, an indicator provides a maximum number of objects to be analyzed or identified in each image to minimize processing time. For example, an image can contain hundreds of objects, but an indicator may indicate that only the ten most prominent objects are to be analyzed and considered to provide an object context. In such cases, the provided object context is effectively a list of the ten most prominent objects in each image.
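  • The cap on analyzed objects can be sketched as follows, assuming a hypothetical object detector that returns (label, prominence score) pairs:

```python
def object_context(detections, max_objects=10):
    """Reduce raw detector output, a list of (label, prominence score)
    pairs, to an object context: the N most prominent objects."""
    ranked = sorted(detections, key=lambda d: d[1], reverse=True)
    return [label for label, _ in ranked[:max_objects]]

# e.g. hundreds of detections collapse to a context of at most ten objects
context = object_context([("cake", 0.97), ("banner", 0.91), ("fork", 0.40)])
```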
  • A location context corresponds to a location at which the image was created. In some embodiments, the location context corresponds to GPS information available via a device taking a photo at the time the photo was taken. In other embodiments, the location context corresponds to a location pictured in an image. For example, consider an image taken at coordinates (X,Y) facing north that displays a barn in the image. A second image taken at the same coordinates (X,Y) facing south does not display the barn, and therefore may not be considered to be the same location as the first image because of the contents of the image. The object recognition techniques discussed previously may also be used to provide location contexts of this nature.
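  • Where images carry EXIF metadata, the time and location contexts might be read as in the following sketch. It uses the Pillow library (a recent version providing Exif.get_ifd); EXIF tag layouts vary between cameras, so production code would need more defensive parsing, and the helper name is an assumption:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def time_and_location_context(path):
    """Extract a time context (capture timestamp) and a location context
    (raw GPS tags) from an image's EXIF data, where available."""
    exif = Image.open(path).getexif()
    # DateTimeOriginal (0x9003) lives in the Exif IFD; fall back to DateTime (0x0132).
    capture_time = exif.get_ifd(0x8769).get(0x9003) or exif.get(0x0132)
    gps = {GPSTAGS.get(tag, tag): value
           for tag, value in exif.get_ifd(0x8825).items()}
    return capture_time, gps
```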
  • An event context is an event to which an image corresponds, and may be determined according to a number of factors. In at least some embodiments, an event context may be determined by analyzing the time context, the object context, and the location context, along with other available information. For example, consider a set of images taken on April 10th between 2 PM and 6 PM. The image locations all correspond to a restaurant, and the object identification results from the image analysis reveal that the most prominently featured objects in the set of images are a birthday cake, a banner reading “Happy Birthday,” a pile of gifts on a table, and a group of individuals. Processing calendar information provided by a user indicates that April 10th is individual A's birthday. Aggregating all of this information would provide an event context for the set of images, indicating that they correspond to a birthday party for individual A. In other embodiments, an event context may be identified according to an image's source location, such as the website it is from or the title of an album the photo is in.
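  • The birthday example can be expressed as a simple aggregation heuristic. The sketch below is purely illustrative; the function name, the set of "festive" objects, and the calendar format are assumptions:

```python
from datetime import datetime

def event_context(time_ctx, location_ctx, object_ctx, calendar):
    """Heuristically infer an event context by aggregating the time,
    location, and object contexts with calendar information."""
    entry = calendar.get(time_ctx.date())  # e.g. {"person": "A", "event": "birthday"}
    festive = {"birthday cake", "banner", "gifts"} & set(object_ctx)
    if entry and entry.get("event") == "birthday" and festive:
        return f"birthday party for individual {entry['person']} at {location_ctx}"
    return None

# Images taken April 10th, 2-6 PM, at a restaurant, showing birthday objects:
ctx = event_context(
    datetime(2017, 4, 10, 14, 30),
    "restaurant",
    ["birthday cake", "banner", "gifts", "group of individuals"],
    {datetime(2017, 4, 10).date(): {"person": "A", "event": "birthday"}},
)
# -> "birthday party for individual A at restaurant"
```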
  • Calculating (230) a confidence weighting for the one or more identified activities may include determining a sequence corresponding to the images. In at least one embodiment, calculating (230) a confidence weighting includes using existing image analysis techniques to identify one or more objects, activities, or locations depicted in each of the images. A confidence weighting corresponding to the likely sequencing of the identified objects or activities is then calculated. In one embodiment, the confidence weighting is based on available timestamp information corresponding to each of the images depicting an activity or object. Because the images may not all suggest the same sequence of activities, the confidence weightings indicate how many (or what percentage) of the images suggest a particular sequence for the activities.
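  • One way to realize such a weighting is to count what fraction of timestamped observations supports each candidate ordering. The patent does not prescribe a particular formula, so the following Python sketch is an illustrative assumption:

```python
from collections import Counter

def sequence_confidence(observations):
    """Each observation is a list of (timestamp, activity) pairs from one
    source, e.g. one image cluster. Returns, for each activity ordering,
    the fraction of observations exhibiting that ordering."""
    counts = Counter()
    for obs in observations:
        ordered = tuple(activity for _, activity in sorted(obs))
        counts[ordered] += 1
    total = sum(counts.values())
    return {sequence: n / total for sequence, n in counts.items()}

weights = sequence_confidence([
    [(1800, "remove shoes"), (1815, "queue"), (1830, "wash feet")],
    [(1805, "remove shoes"), (1820, "queue"), (1835, "wash feet")],
    [(1800, "queue"), (1810, "remove shoes"), (1830, "wash feet")],
])
# 2 of 3 observations support (remove shoes, queue, wash feet): weight 0.67
```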
  • The confidence weighting for an activity may additionally be based on how frequently said activity appears in an image cluster. In such embodiments, a lower selected threshold “X” (wherein X is a percentage or a ratio) may be implemented to define that any activities that appear less frequently than the selected threshold “X” are to be excluded from a created workflow. Additionally, a selected optional threshold range may be implemented to define that any activity for which a corresponding appearance frequency falls within the selected optional threshold range will be included as an optional step in a created workflow. Consider an example where a set of images includes seven image clusters related to attending a theme park, and where a selected lower threshold is 0.5, and a selected optional threshold range is (0.5, 0.6). Six of the clusters include images of people standing in a ticket queue, five of the clusters include images of people entering a gate, all seven clusters include images of people on a ride, four of the clusters include images of groups of people posing, and one of the clusters includes an image of a bird sitting on a tree branch. In such an example, the confidence weighting may be calculated according to what percentage of the clusters include an image of each activity. Based on these weightings, it may be determined that taking a group photo is an optional step since just over half of the image clusters include such an image, and the image of the bird (and an identified corresponding activity such as “birdwatching”) may be disregarded when creating a workflow because its confidence weighting is below the selected threshold.
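  • The theme-park example maps directly onto a small classification routine. The following sketch uses the thresholds selected above (lower threshold 0.5, optional range (0.5, 0.6)); the function name is hypothetical:

```python
def classify_activities(appearances, total_clusters, lower=0.5, optional=(0.5, 0.6)):
    """Map each activity's appearance frequency across clusters to its role
    in the created workflow: required, optional, or excluded."""
    roles = {}
    for activity, n in appearances.items():
        freq = n / total_clusters
        if freq < lower:
            roles[activity] = "excluded"
        elif optional[0] < freq < optional[1]:
            roles[activity] = "optional"
        else:
            roles[activity] = "required"
    return roles

roles = classify_activities(
    {"ticket queue": 6, "enter gate": 5, "ride": 7, "group photo": 4, "bird on branch": 1},
    total_clusters=7,
)
# group photo: 4/7 = 0.57 -> optional; bird: 1/7 = 0.14 -> excluded; the rest required
```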
  • Creating (240) a context workflow according to the confidence weightings may include analyzing the calculated confidence weightings for the one or more identified activities. The context workflow may be created according to the most likely sequence of activities as indicated by the calculated confidence weightings. In one embodiment, creating the workflows further includes saving the created workflow as a workflow file. In such an embodiment, the workflow file may be stored in a workflow database. The workflow file may contain the workflow itself, as well as one or more data fields indicating contextual details corresponding to the created workflow. For example, the data fields may indicate that the workflow corresponds to locations X and Y and event Z. These data fields enable simplified query execution by enabling the contextual details of each workflow to be quickly identified in response to a query.
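  • As an illustration of such a workflow file, the sketch below stores the workflow steps alongside context data fields for querying. The patent does not specify a file format, so the dataclass layout and JSON serialization are assumptions:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class WorkflowFile:
    """A created workflow plus context data fields that let queries be
    answered without parsing the workflow body itself."""
    steps: list                                  # highest-confidence activity sequence
    locations: list = field(default_factory=list)
    times: list = field(default_factory=list)
    events: list = field(default_factory=list)

    def save(self, path):
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

wf = WorkflowFile(
    steps=["remove shoes", "queue", "wash feet", "eat meal"],
    locations=["A", "B", "C", "D"],
    times=["18:00", "18:15", "18:30", "18:40"],
)
wf.save("workflow_example.json")
```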
  • Receiving (250) a query for a workflow corresponding to an indicated context may include receiving a user-initiated query for a workflow corresponding to a set of indicated contextual details. The received query may include details corresponding to any number of available contexts. For example, the query may request workflows that include a specific location, or may request workflows that include a specific location, a specific time, and a specific event context. Additionally, the received query may indicate further limitation requirements for the workflows. For example, a query may request a workflow that begins or ends at a specific location. In other words, the query may impose sequence limitations on the workflows in addition to contextual limitations. In at least one embodiment, the received query corresponds to a device's active contextual details. In other words, the query is executed to identify workflows corresponding to the device's current conditions, such as its current location and the current time.
  • Identifying (260) one or more workflows that match the indicated context may include searching one or more workflow files to identify one or more workflows that match the limitations imposed by the query. In one embodiment, identifying (260) one or more workflows that match the indicated context includes analyzing the context data fields of one or more workflow files to identify any files that contain a workflow corresponding to the indicated context. For example, if a query for workflows corresponding to location X is executed, application 112 may analyze each workflow file to determine whether said workflow file's context data fields indicate that the workflow corresponds to location X at any point.
  • Transmitting (270) the at least one matched context workflow may include providing the identified one or more workflows to a device from which the query was received. In one embodiment, only workflows that match all contexts indicated by the received query are transmitted. In other embodiments, workflows that match all contexts are transmitted when available, but workflows that match the most indicated contexts may be provided in the absence of a workflow that meets all the requirements.
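  • Steps 250 through 270 might be realized along the following lines, reusing the hypothetical WorkflowFile structure sketched above: the query carries the indicated contextual details, workflow files are matched on their context data fields, and when no workflow satisfies every indicated context, the workflows matching the most contexts are returned instead:

```python
def match_workflows(query, workflows):
    """Score each WorkflowFile (sketched earlier) by how many indicated
    contexts it matches; return full matches if any exist, otherwise the
    best partial matches."""
    def score(wf):
        s = 0
        if "location" in query and query["location"] in wf.locations:
            s += 1
        if "time" in query and query["time"] in wf.times:
            s += 1
        if "event" in query and query["event"] in wf.events:
            s += 1
        return s

    scored = [(score(wf), wf) for wf in workflows]
    best = max((s for s, _ in scored), default=0)
    full = len(query)
    wanted = full if any(s == full for s, _ in scored) else best
    return [wf for s, wf in scored if s == wanted and s > 0]

# e.g. a device's current conditions expressed as a query:
results = match_workflows({"location": "B", "time": "18:15"}, [wf])
```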
  • FIG. 3A depicts an example set of images 300 in accordance with one embodiment of the present invention. As depicted, the set of images 300 is divided into four image clusters 310, wherein each cluster comprises a plurality of images (depicted by shaded squares, wherein the shading indicates which cluster each image belongs to). The images are positioned according to the location on which the image is focused. The clusters are then created according to shared locations of focus in the images. As depicted, some of the images are not included in a cluster, as these images are not directed towards one of the locations corresponding to the four clusters.
  • FIG. 3B depicts an example table 320 indicating the contextual details to which each cluster of images 310 from FIG. 3A corresponds. As depicted, cluster 310A corresponds to images directed towards location A, taken at 18:00, and the images include people removing their shoes. Cluster 310B corresponds to location B, taken at 18:15, and the images include people standing in a queue. Cluster 310C corresponds to location C, taken at 18:30, and the images include people washing their feet under a water tap. Cluster 310D corresponds to location D, taken at 18:40, and the images include people eating a meal at a group of tables. In this example, the time indicated in the table corresponds to an average capture time for all of the images in the cluster. This average capture time is used to indicate a sequence in which the events indicated by the images typically occur. In another embodiment, the range of existing capture times may be used to determine sequencing as opposed to the average capture time.
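  • For illustration, the average capture time of a cluster, and the resulting sequencing of clusters, might be computed as follows (names and dates are hypothetical):

```python
from datetime import datetime, timedelta

def average_capture_time(timestamps):
    """Average the capture times within one image cluster; clusters are
    then ordered by this average to infer the typical event sequence."""
    base = min(timestamps)
    offsets = [(t - base).total_seconds() for t in timestamps]
    return base + timedelta(seconds=sum(offsets) / len(offsets))

clusters = {
    "310A: remove shoes": [datetime(2017, 5, 1, 17, 55), datetime(2017, 5, 1, 18, 5)],
    "310B: queue":        [datetime(2017, 5, 1, 18, 10), datetime(2017, 5, 1, 18, 20)],
}
ordered = sorted(clusters, key=lambda c: average_capture_time(clusters[c]))
# -> ["310A: remove shoes", "310B: queue"]
```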
  • FIG. 4 depicts an example created workflow 400 in accordance with one embodiment of the present invention. Workflow 400 corresponds to the set of images depicted and discussed with respect to FIG. 3A and FIG. 3B. As depicted, workflow 400 includes removing (410) shoes at location A between 17:50 and 18:10, standing (420) in line at location B between 18:05 and 18:30, washing (430) feet under a tap at location C between 18:20 and 18:50, and eating (440) a meal at a group of tables at location D between 18:30 and 19:00. In this embodiment, the timestamp guidelines are created according to the range of times in which images corresponding to each activity were captured.
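  • The timestamp guidelines of workflow 400 can likewise be derived from the range of capture times for each activity, for example:

```python
from datetime import datetime

def step_window(timestamps):
    """Derive a workflow step's timestamp guideline from the range of
    capture times of the images depicting that activity, as in workflow 400."""
    return (min(timestamps).strftime("%H:%M"), max(timestamps).strftime("%H:%M"))

window = step_window([datetime(2017, 5, 1, 17, 50), datetime(2017, 5, 1, 18, 10)])
# -> ("17:50", "18:10"), i.e. "remove shoes at location A between 17:50 and 18:10"
```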
  • FIG. 5 depicts a block diagram of components of computer 500 in accordance with an illustrative embodiment of the present invention. It should be appreciated that FIG. 5 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made.
  • As depicted, the computer 500 includes communications fabric 502, which provides communications between computer processor(s) 504, memory 506, persistent storage 508, communications unit 512, and input/output (I/O) interface(s) 514. Communications fabric 502 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, communications fabric 502 can be implemented with one or more buses.
  • Memory 506 and persistent storage 508 are computer-readable storage media. In this embodiment, memory 506 includes random access memory (RAM) 516 and cache memory 518. In general, memory 506 can include any suitable volatile or non-volatile computer-readable storage media.
  • One or more programs may be stored in persistent storage 508 for access and/or execution by one or more of the respective computer processors 504 via one or more memories of memory 506. In this embodiment, persistent storage 508 includes a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage 508 can include a solid state hard drive, a semiconductor storage device, read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or any other computer-readable storage media that is capable of storing program instructions or digital information.
  • The media used by persistent storage 508 may also be removable. For example, a removable hard drive may be used for persistent storage 508. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of persistent storage 508.
  • Communications unit 512, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 512 includes one or more network interface cards. Communications unit 512 may provide communications through the use of either or both physical and wireless communications links.
  • I/O interface(s) 514 allows for input and output of data with other devices that may be connected to computer 500. For example, I/O interface 514 may provide a connection to external devices 520 such as a keyboard, keypad, a touch screen, and/or some other suitable input device. External devices 520 can also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software and data used to practice embodiments of the present invention can be stored on such portable computer-readable storage media and can be loaded onto persistent storage 508 via I/O interface(s) 514. I/O interface(s) 514 also connect to a display 522.
Display 522 provides a mechanism to display data to a user and may be, for example, a computer monitor.
The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
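As a bridge to the claims that follow, this minimal sketch ties the claimed steps together: a confidence weighting computed as the percentage of received images corresponding to each activity, a selected threshold that admits an activity into the workflow, an optional threshold range for less frequent activities, and retrieval of stored workflows whose context indicator includes a queried context. The threshold values, function names, and storage shape below are assumptions made for illustration; no embodiment is limited to them.

    def create_contextual_workflow(activity_counts, total_images,
                                   include_threshold=0.5,
                                   optional_range=(0.2, 0.5)):
        # The confidence weighting for an activity is the percentage of the
        # received images that correspond to it; the threshold values here
        # are illustrative placeholders, not claimed values.
        workflow = {"required": [], "optional": [], "contexts": set()}
        for activity, count in activity_counts.items():
            weighting = count / total_images
            if weighting > include_threshold:
                workflow["required"].append((activity, weighting))
            elif optional_range[0] <= weighting <= optional_range[1]:
                workflow["optional"].append((activity, weighting))
        return workflow

    def find_workflows(workflow_database, indicated_context):
        # Search a stored collection for workflows whose context indicator
        # includes the indicated context.
        return [wf for wf in workflow_database
                if indicated_context in wf["contexts"]]

For example, if 40 of 50 received images correspond to an activity, its weighting of 0.8 exceeds the illustrative 0.5 threshold and the activity is included as a required step; an activity appearing in 15 of 50 images (0.3) would instead fall within the illustrative optional range.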

Claims (20)

What is claimed is:
1. A computer implemented method for generating contextual workflows, the method comprising:
receiving a plurality of images;
analyzing the received plurality of images to identify one or more activities corresponding to the plurality of images and one or more contextual details corresponding to the plurality of images, wherein the contextual details indicate a capture location, a capture time, a capture sequence, or a capture subject for the plurality of images;
calculating a confidence weighting for the one or more identified activities for the plurality of images;
creating a contextual workflow according to the calculated confidence weightings;
receiving a query for a workflow corresponding to an indicated context; and
identifying one or more workflows that match the indicated context.
2. The computer implemented method of claim 1, wherein calculating a confidence weighting for an activity comprises calculating a percentage of the plurality of images that correspond to said activity.
3. The computer implemented method of claim 1, wherein creating a contextual workflow according to the calculated confidence weightings comprises creating a contextual workflow that includes activities that have a confidence weighting greater than a selected threshold.
4. The computer implemented method of claim 1, further comprising creating an indicator for the created contextual workflow that indicates one or more contexts to which the created contextual workflow corresponds.
5. The computer implemented method of claim 4, wherein identifying one or more workflows that match the indicated context comprises searching for a workflow that has an indicator that includes the indicated context.
6. The computer implemented method of claim 1, further comprising storing the created contextual workflow in a contextual workflow database.
7. The computer implemented method of claim 1, further comprising identifying one or more optional activities for a workflow, wherein the one or more optional activities correspond to activities for which the calculated confidence weightings fall within a selected optional threshold range.
8. A computer program product for generating contextual workflows, the computer program product comprising:
one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising instructions to:
receive a plurality of images;
analyze the received plurality of images to identify one or more activities corresponding to the plurality of images and one or more contextual details corresponding to the plurality of images, wherein the contextual details indicate a capture location, a capture time, a capture sequence, or a capture subject for the plurality of images;
calculate a confidence weighting for the one or more identified activities for the plurality of images;
create a contextual workflow according to the calculated confidence weightings;
receive a query for a workflow corresponding to an indicated context; and
identify one or more workflows that match the indicated context.
9. The computer program product of claim 8, wherein instructions to calculate a confidence weighting for an activity comprise instructions to calculate a percentage of the plurality of images that correspond to said activity.
10. The computer program product of claim 8, wherein instructions to create a contextual workflow according to the calculated confidence weightings comprise instructions to create a contextual workflow that includes activities that have a confidence weighting greater than a selected threshold.
11. The computer program product of claim 8, further comprising instructions to create an indicator for the created contextual workflow that indicates one or more contexts to which the created contextual workflow corresponds.
12. The computer program product of claim 11, wherein instructions to identify one or more workflows that match the indicated context comprise instructions to search for a workflow that has an indicator that includes the indicated context.
13. The computer program product of claim 8, further comprising instructions to store the created contextual workflow in a contextual workflow database.
14. The computer program product of claim 8, further comprising instructions to identify one or more optional activities for a workflow, wherein the one or more optional activities correspond to activities for which the calculated confidence weightings fall within a selected optional threshold range.
15. A computer system for generating contextual workflows, the computer system comprising:
one or more computer processors;
one or more computer-readable storage media;
program instructions stored on the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising instructions to:
receive a plurality of images;
analyze the received plurality of images to identify one or more activities corresponding to the plurality of images and one or more contextual details corresponding to the plurality of images, wherein the contextual details indicate a capture location, a capture time, a capture sequence, or a capture subject for the plurality of images;
calculate a confidence weighting for the one or more identified activities for the plurality of images;
create a contextual workflow according to the calculated confidence weightings;
receive a query for a workflow corresponding to an indicated context; and
identify one or more workflows that match the indicated context.
16. The computer system of claim 15, wherein instructions to calculate a confidence weighting for an activity comprise instructions to calculate a percentage of the plurality of images that correspond to said activity.
17. The computer system of claim 15, wherein instructions to create a contextual workflow according to the calculated confidence weightings comprise instructions to create a contextual workflow that includes activities that have a confidence weighting greater than a selected threshold.
18. The computer system of claim 15, further comprising instructions to create an indicator for the created contextual workflow that indicates one or more contexts to which the created contextual workflow corresponds.
19. The computer system of claim 18, wherein instructions to identify one or more workflows that match the indicated context comprise instructions to search for a workflow that has an indicator that includes the indicated context.
20. The computer system of claim 15, further comprising instructions to store the created contextual workflow in a contextual workflow database.
US15/612,005 2017-06-02 2017-06-02 Workflow creation by image analysis Abandoned US20180349687A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/612,005 US20180349687A1 (en) 2017-06-02 2017-06-02 Workflow creation by image analysis
US16/402,372 US10699105B2 (en) 2017-06-02 2019-05-03 Workflow creation by image analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/612,005 US20180349687A1 (en) 2017-06-02 2017-06-02 Workflow creation by image analysis

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/402,372 Continuation US10699105B2 (en) 2017-06-02 2019-05-03 Workflow creation by image analysis

Publications (1)

Publication Number Publication Date
US20180349687A1 true US20180349687A1 (en) 2018-12-06

Family

ID=64458290

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/612,005 Abandoned US20180349687A1 (en) 2017-06-02 2017-06-02 Workflow creation by image analysis
US16/402,372 Active US10699105B2 (en) 2017-06-02 2019-05-03 Workflow creation by image analysis

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/402,372 Active US10699105B2 (en) 2017-06-02 2019-05-03 Workflow creation by image analysis

Country Status (1)

Country Link
US (2) US20180349687A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11698927B2 (en) * 2018-05-16 2023-07-11 Sony Interactive Entertainment LLC Contextual digital media processing systems and methods

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070203589A1 (en) * 2005-04-08 2007-08-30 Manyworlds, Inc. Adaptive Recombinant Process Methods
CA2711544A1 (en) * 2007-07-10 2009-01-15 Information In Place, Inc. Method and system for managing enterprise workflow and information
US9015099B2 (en) 2012-08-14 2015-04-21 Sri International Method, system and device for inferring a mobile user's current context and proactively providing assistance
US20140074545A1 (en) 2012-09-07 2014-03-13 Magnet Systems Inc. Human workflow aware recommendation engine
US20140095203A1 (en) 2012-09-28 2014-04-03 Siemens Medical Solutions Usa, Inc. Medical workflow determination and optimization
US20140207506A1 (en) 2013-01-21 2014-07-24 Salesforce.Com, Inc. Computer implemented methods and apparatus for recommending a workflow
US20140337085A1 (en) 2013-05-08 2014-11-13 Xerox Corporation Contextual service workflow recommendation using random walk on social network and service network
US20160328604A1 (en) * 2014-01-07 2016-11-10 Arb Labs Inc. Systems and methods of monitoring activities at a gaming venue
US20150278726A1 (en) * 2014-03-31 2015-10-01 James Gelsin Marx Systems and methods for workflow processing

Also Published As

Publication number Publication date
US10699105B2 (en) 2020-06-30
US20190347477A1 (en) 2019-11-14

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOSTICK, JAMES E.;CHEN, DANNY Y.;RAKSHIT, SARBAJIT K.;AND OTHERS;SIGNING DATES FROM 20170530 TO 20170601;REEL/FRAME:042572/0380

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: KYNDRYL, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:058213/0912

Effective date: 20211118