WO2021247649A1 - Image capture system and processing - Google Patents

Image capture system and processing

Info

Publication number
WO2021247649A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image capture
capture device
track
examples
Prior art date
Application number
PCT/US2021/035369
Other languages
English (en)
Inventor
Joseph Mark JENKINS
L. William Matthew JENKINS
Original Assignee
Iotta, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iotta, Llc filed Critical Iotta, Llc
Publication of WO2021247649A1
Priority to US18/061,268 (published as US20230114454A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply

Definitions

  • a retail environment can pose particular challenges to flexible implementation of visual data capture devices. Such challenges can result from ever-changing product display or other furniture layouts, the single-use or disposable nature of product displays, unavailability of mains power at a desired location, and various other constraints.
  • a device comprises a dye-sensitized solar panel configured to harvest power from energy emitted as light by an indoor light source, a camera, and a processor coupled to the dye-sensitized solar panel and the camera.
  • the processor can be configured to: receive at least some of the power harvested by the dye-sensitized solar panel, capture an image via the camera, and transmit the image to a computing device.
  • a battery can be coupled to the dye-sensitized solar panel and the processor that can be configured to store at least some of the power harvested by the dye-sensitized solar panel and provide at least some of the stored power to the processor.
  • a system comprises a computing device, an artificial light source, and an image capture device wirelessly communicatively coupled to the computing device.
  • the image capture device can be configured to: harvest power from light emitted by the artificial light source, capture an image via the camera, and transmit the image to the computing device.
  • FIG. 1 is a block diagram of an illustrative system in accordance with aspects of the present disclosure.
  • FIG. 2 is a block diagram of an illustrative image capture device in accordance with aspects of the present disclosure.
  • FIG. 3 is a flowchart of an illustrative device initialization method in accordance with aspects of the present disclosure.
  • FIG. 4 is a flowchart of an illustrative method of image capture and processing in accordance with aspects of the present disclosure.
  • FIG. 5 is a flowchart of an illustrative method of image processing in accordance with aspects of the present disclosure.
  • FIG. 6 is a flowchart of an illustrative method of data management and control in accordance with aspects of the present disclosure.
  • FIG. 7 is a schematic of an exemplary computer system capable of use with the present embodiments.
  • This certain retail space can be referred to as promotional space and can include areas such as end-cap spaces, eye-level space, freestanding spaces (e.g., aisle centers, displays near entry or exit doors, etc.), “out of place” spaces (e.g., candy and popcorn displays located near movies instead of only in a food section), near checkout registers, etc.
  • this promotional space is compensated such that a vendor having products appear in the promotional space pays a retailer for having that vendor’s products appear in the promotional space.
  • vendors prepare and distribute, or make arrangements with third-parties to prepare and distribute, marketing materials to retailers. These marketing materials can include signage, pop-up or assemblable displays that either appear alongside products or serve as shelving or storage for the products, and other similar marketing materials to bring attention to the vendor’s products being offered by the retailer.
  • the payment for the promotional space, and the design, manufacturing, and distribution of the marketing materials may create spending decisions for the vendors.
  • One way to make or justify these vendor spending decisions is through data indicating an expected return on investment for the vendor spending or effectiveness of product placement in the promotional space or of the marketing materials. That data can include information related to expected sales resulting from the vendor spending or sales that have already occurred from ongoing vendor spending that is under consideration for renewal or expansion.
  • the data may indicate demographic information about customers who visit the retailer such as age, gender, race, etc.
  • the data may indicate trends within the retail environment such as brand recognition, level of consumer engagement with a promotional space or product, a number of consumers passing a particular space per unit of time, a consumer traffic pattern within the retail environment, an amount of time that a given consumer engages with the particular space (e.g., a dwell time), etc.
  • this data creates an additional potential revenue stream for retailers.
  • retailers may collect the data and provide the data to vendors at a cost to enable the vendors to have greater insight into where they spend money on promotional spaces or marketing materials.
  • a first vendor (or group of vendors) may collect the data and provide the data to a second vendor at a cost to enable the vendors to have greater insight into where the second vendor may choose to spend money on promotional spaces or marketing materials.
  • a vendor may be considering spending money on promotional space or marketing materials in a particular retail environment for a product primarily of interest to persons age 18 to 23.
  • the vendor may choose to instead spend money on promotional spaces or marketing materials at another retail environment whose consumer base more closely aligns to the vendor’s product. Similar decisions about the value of certain promotional space and/or effectiveness of marketing materials can be made based on criteria set by the vendor.
  • While collection of the data can be advantageous to the retailer, the collection can also be costly and/or inconvenient. For example, the retailer could employ additional workers to monitor certain promotional spaces or products related to marketing materials and record information regarding consumer demographics and/or engagement either passively through observation or actively through surveying consumers.
  • Another example includes the use of image capture devices to monitor the promotional spaces and/or products related to the marketing materials and computing devices to analyze output of the image capture devices to derive the data.
  • this approach, too, faces potential difficulties.
  • some promotional spaces or marketing materials are located in an indoor area that lacks easy access to mains power.
  • at least some promotional space displays or marketing materials are single use in nature and disposable, whereas the image capture devices may not be generally considered single use or disposable.
  • Video cameras can present a number of issues. For example, video cameras are relatively expensive and must be connected to a significant source of power using a wired connection. This type of connection limits the available placement locations while also requiring expertise and an involved installation procedure. Further, the use of video images requires extensive processing power to determine information from the images. For example, even low frame rate video can result in over 2 million frames per day to process, which can require significant communication links and processing power.
  • the image capture device in at least some examples, is referred to as an Internet of Things (IoT) device because the image capture device includes wireless communication connectivity.
  • the image capture device in at least some examples, is powered by a battery that is recharged wirelessly.
  • the wireless charging in at least some examples, is performed via light-sensitive material (such as solar cells or dye-sensitized solar cells) absorbing energy emitted by lights (e.g., either sunlight or an artificial light source such as lightbulbs of any suitable technology) and converting that light into power for recharging the battery and/or powering the image capture device.
  • the image capture device in at least some examples, captures images at predefined intervals and transmits those images for analysis. In other examples, the image capture device performs pre-processing on the images prior to transmission for analysis.
  • the transmission is, in some examples, to an edge device.
  • the edge device in some examples, performs additional processing and/or analysis of the images according to artificial intelligence algorithms.
  • the edge device further controls the image capture device, such as controlling a rate of image capture and transmission or other characteristics related to power management.
  • the edge device transmits the images and/or data derived from the images to a server (e.g., such as in a cloud computing environment) for further analysis and data presentation to a user.
  • the image capture device as described herein can be used to capture a number of images suitable for obtaining the same or similar information as more complex systems.
  • the same or similar demographic information as can be obtained using video systems can be obtained from the image capture device described herein by capturing between 1,000 and 4,000 images per day, which may be nearly three orders of magnitude fewer images than video produces.
  • capturing one image per minute for an entire day may only result in around 1,440 images.
  • a similar number of images may be captured using a higher capture rate during designated times (e.g., high-traffic times), which may allow for similar data as obtained from videos without the need for video cameras, wired power, expensive installations, or time-consuming processing of the video feed.
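A quick back-of-the-envelope check of the frame counts above; a minimal sketch, assuming a modest 24 frames-per-second video feed for comparison:

```python
# Compare daily frame counts: low-frame-rate video vs. interval still capture.
SECONDS_PER_DAY = 24 * 60 * 60

video_fps = 24                                   # assumed modest video frame rate
video_frames = video_fps * SECONDS_PER_DAY       # 2,073,600 frames/day (over 2 million)

interval_s = 60                                  # one still image per minute
still_frames = SECONDS_PER_DAY // interval_s     # 1,440 images/day

print(f"video:    {video_frames:,} frames/day")
print(f"interval: {still_frames:,} images/day")
print(f"ratio:    ~{video_frames // still_frames:,}x")  # ~1,440x, about three orders of magnitude
```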
  • the vendor collects the data.
  • the vendor or a third-party engaged by the vendor, may provide the retailer with a display or marketing materials that are at least partially prefabricated and are then assembled on site by the retailer, an agent or representative of the vendor, or a third-party engaged by the vendor.
  • the display or marketing materials include, among their components, the image capture device and instructions for affixing the image capture device to the display or marketing materials and activating the image capture device. After activation, the image capture device operates in a manner substantially similar to that described above, providing the images and resulting data to the vendor, or to another third party, without collection by the retailer and without necessitating that the vendor obtain the data from the retailer.
  • teachings of the description are applicable to other environments.
  • teachings of the description are applicable to both indoor and outdoor environments, as well as environments other than marketing, including but not limited to security environments.
  • the image processing device can be used in other observed environments, which can include retail environments, but can also include settings such as security, social gatherings (e.g., sporting events, malls, parks, etc.), etc.
  • the image capture device can capture demographic information in addition to other observations such as distance between people (e.g., social distancing monitoring, etc.), thermal images to determine temperatures of people, and the like. The information can be captured and displayed in a dashboard along with the other information described herein.
  • the system 100 includes both a cloud computing component 102 and an observed environment 104, which in some embodiments can be a retail environment but can also include other environments such as social settings, sporting events, security settings, etc.
  • the observed environment 104 is an indoor environment including a light source 106, a display 108, an image capture device 110, and an edge device 112.
  • the image capture device 110 is adapted to interface with the display 108, for example, by being affixed to the display 108 and/or affixed to a position that can view the display or an area around the display 108.
  • the image capture device 110 can be communicatively coupled to the edge device 112, which is communicatively coupled to the cloud computing component 102.
  • the light source 106 and the edge device 112 can be co-located and/or are, or exist in, the same device.
  • the cloud computing component 102 is any computing device capable of performing computer processing.
  • the cloud computing component 102 is a processing device, a computer system, a server, a computing resource, a cloud-computing node, or an artificial intelligence computing system capable of processing received data and/or images.
  • the cloud computing component 102 can be centralized (e.g., a single server) or decentralized (e.g., a plurality of computing nodes).
  • the cloud computing component 102 is capable of performing processing according to artificial intelligence algorithms, for example, to derive or determine demographic or other information from images or data sets.
  • the light source 106 can be any suitable light source that emits energy in the form of light within and/or outside a visible light spectrum.
  • the light source 106 may be a naturally occurring source of light, such as the sun.
  • the light source 106 may be a manufactured source of light, such as a light bulb.
  • the light source 106 is a fluorescent light, an incandescent light, or an LED light.
  • the light source 106 can emit light having an illuminance defined in units of lux.
  • the illuminance of the light source 106 may range from about 200 lux to about 2000 lux, where an LED light can have an illuminance of up to about 2000 lux.
  • the illuminance of LED lights is, for at least some implementations of the light source 106, one or more orders of magnitude greater than the illuminance of fluorescent or incandescent lights.
  • a wavelength of energy emitting from the light source 106 can be adjustable or tunable according to user input or preference.
  • a wavelength of light emitted by the light source 106 can be adjustable or tunable to modify energy output of the light source 106.
  • when the light source 106 is an LED light, at least some implementations of the light source 106 have peak or optimal efficiency (e.g., maximum harvestable energy output) at a wavelength of about 450 nanometers.
  • the display 108 can be, in some examples, a product display, a product at a retail location (e.g., on a shelf), and/or marketing materials.
  • the display 108 can be configured to serve as a storage apparatus for products offered for sale.
  • the display 108 can be configured to augment products or services offered for sale, such as being positioned proximate to such products or information regarding such services.
  • the display 108 can be a pop-up display apparatus.
  • the display 108 may be intended as a single use apparatus that is provided to a retailer to construct within the observed environment 104.
  • the display 108 can be provided to the retailer by a vendor whose products or services correspond to visuals of the display 108.
  • the display 108 can be, in some examples, manufactured in a manner such as to be readily assemblable with minimal instruction, investment of time, or investment of effort. Such a manufacturing focus minimizes the cost to the retailer of assembling and deploying the display 108 within the observed environment 104.
  • the image capture device 110 can be, in some examples, a computing platform capable of capturing images, processing the images, transmitting and receiving data, storing power, and wirelessly obtaining power.
  • the image capture device includes a system on a chip (SoC) having a processor, memory, transceiver, antenna, and camera.
  • the image capture device further includes, as a component of the SoC or coupled to the SoC, one or more sensors such as light sensors, motion sensors, heat, thermal, or temperature sensors, etc.
  • the SoC can comprise or be coupled to a battery and a wireless power receiver.
  • the wireless power receiver can be one or more photovoltaic panels, one or more magnetic coils, an antenna, etc.
  • the wireless power receiver includes solar cells (e.g., dye sensitized solar cells, etc.) suitable for harvesting energy from the light source 106.
  • the image capture device 110 can obtain or harvest power from the light source 106 to charge the battery.
  • At least some examples of the image capture device 110 are configured to interface with the display 108 to mount or otherwise affix to the display 108.
  • the image capture device 110 may be configured to affix to a back surface of the display 108, affix to a surface of the display 108 that is a part of an inner volume of the display 108, affix to a top of the display 108, and/or affix to any other suitable portion of the display 108.
  • At least some implementations of the image capture device 110 are separable such that the separated components of the image capture device are capable of being affixed to the display 108 in separate areas and coupled together via one or more conductors.
  • a first portion of the image capture device 110, such as the SoC, may be affixed to the display 108 in a first location, and a second portion of the image capture device 110, such as the battery and/or the wireless power receiver, may be affixed to the display 108 in a second location and coupled via one or more conductors to the first portion of the image capture device 110.
  • Although the system 100 illustrates only one image capture device 110, in at least some implementations the system 100 includes additional or multiple image capture devices 110.
  • the multiple image capture devices 110 can be associated with multiple displays 108.
  • multiple image capture devices 110 can be arranged to capture different angles of a single display 108 and/or product, and/or multiple image capture devices 110 can be arranged at different locations in an environment to capture images for multiple products at a time.
  • the multiple image capture devices 110 can be connected to the same or different edge devices 112.
  • the edge device 112 can be, in some examples, a computing platform or other processing device capable of processing images, transmitting and receiving data via wireless and/or wired communication, and storing data. To process the images, in at least some examples, the edge device 112 is capable of performing processing according to artificial intelligence algorithms, for example, to derive or determine demographic or other information from images or data sets. Some implementations of the edge device 112 are further capable of exerting a measure of control over the image capture device 110, such as controlling when the image capture device 110 captures and/or transmits images, performing power or other management of the image capture device 110, hosting a management or other graphical user or command line interface for the image capture device 110, etc., as described in more detail herein.
  • the edge device 112 functions as a bridge or access point to facilitate communication between the image capture device 110 and the cloud computing component 102.
  • some implementations of the edge device 112 can include cellular communication functionality such that the edge device 112 includes, or is, a long-term evolution (LTE) access point.
  • the inclusion of cellular communication functionality can, in some examples, enable the edge device 112 to communicate with the cloud computing component 102 without requiring, or relying on, a network infrastructure of an environment.
  • the system 100 may include multiple edge devices 112.
  • multiple edge devices 112 may be distributed in the observed environment 104, with each edge device 112 within communication range of at least one other edge device 112.
  • the multiple edge devices 112 may form, or function as, a mesh network.
  • the edge devices 112 may remain active and accessible even if there is no display 108 or image capture device 110 located in, or operating in, the observed environment at a given point in time. This may facilitate at least some of the edge devices 112 being used as optional retail access points for wireless internet connectivity for customers, guests, and/or other eligible (e.g., licensed, registered, etc.) third parties.
  • the image capture device 110 can acquire images and transmit those images to the edge device 112.
  • the acquisition and transmission of images may be on a predetermined schedule, on command, or triggered based on the occurrence of a specified event or criterion.
  • the image capture device 110 can acquire images based on an output of a sensor, such as indicating the possible or probable presence of a person in view of the image capture device 110.
  • the image capture device 110 can acquire images based on a predetermined schedule, such as a number of images per second, a number of images per minute, a number of images per hour, a number of images per day, or any other suitable schedule.
  • the predetermined schedule is programmable by accessing a control interface of the image capture device 110 via the edge device 112 or directly via the image capture device 110 itself.
  • the image capture device 110 can acquire images on command, such as upon or responsive to receipt of a command received from the edge device 112 to capture one or more images.
  • the image capture device 110 may be controlled to capture and/or transmit images at an interval that enables the image capture device 110 to operate for approximately a predetermined time for a given energy storage capacity of the battery and/or energy harvesting or reception capacity of the wireless power receiver.
  • the wireless power receiver may be capable of providing the image capture device with a first amount of power over a given time interval.
  • the image capture device 110 may consume a second amount of power to capture an image, process the image to generate a track record, and/or transmit the image to another device.
  • a frequency with which the image capture device 110 captures images, processes the images to generate track records, and/or transmits the images to another device may be controlled.
  • the image capture device 110 may be programmed or controlled such that the first amount of power is greater than the second amount of power multiplied by a number of images captured, processed, and/or transmitted.
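The constraint in the preceding bullets can be written out directly; a minimal sketch, where the harvest rate, interval, and per-image energy figures are assumptions for illustration:

```python
# Power-budget constraint: energy harvested over an interval must cover the
# energy spent per image captured, processed, and transmitted.

def max_images(harvest_mw: float, interval_h: float, energy_per_image_mwh: float) -> int:
    """Largest image count whose total energy fits the harvested budget."""
    budget_mwh = harvest_mw * interval_h              # "first amount of power", as energy
    return int(budget_mwh // energy_per_image_mwh)    # images the "second amount" allows

# e.g., 3 mW harvested continuously for 12 hours at ~0.05 mWh per image
print(max_images(harvest_mw=3.0, interval_h=12.0, energy_per_image_mwh=0.05))  # 720
```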
  • the image capture device 110 is suitable for, and capable of, capturing one image per second for a 12-hour period (e.g., when the light source 106 is an indoor light source other than an LED light).
  • the image capture device 110 is suitable for, and capable of, capturing one image per second for a 24-hour period (e.g., when the light source 106 is an LED light).
  • prior to transmitting the images to the edge device 112, the image capture device 110 performs some amount of processing on the images (e.g., referred to as pre-processing when additional processing will later be performed on the images by another device such as the edge device 112 and/or cloud computing component 102). For example, in some implementations, the image capture device 110 processes the images to attempt to determine one or more characteristics based on the images. The image capture device 110 may process the images to perform facial detection, determine whether a person depicted in the images is viewing an area of interest of the display 108, etc.
  • After processing the images, in some examples the image capture device 110 transmits the images to the edge device 112. In other examples, after processing the images the image capture device 110 transmits the images to the cloud computing component 102 or any other suitable computing device. Alternatively, after processing the images, in some examples, the image capture device 110 discards the images and/or does not transmit the images. For example, when the processing indicates that the images do not contain additional data useful for determining metrics related to the person depicted in the images, the image capture device 110 may not transmit the images.
  • the image capture device 110 can increase the efficiency of its power consumption. For example, by not transmitting images from which no additional useful data can be obtained, the image capture device 110 can preserve the power that would otherwise have been expended in performing the transmissions. This preservation of power extends a usable period of time of the image capture device 110 for a given charge of the battery of the image capture device 110.
  • additional useful data in the above context, is data that will, or may, lead to the identification or determination of characteristics or metrics not otherwise known in the absence of the additional data.
  • one metric for determination is dwell time. If the dwell time is of interest up to a predetermined value, and no other metrics are of interest, images depicting dwell time of the same consumer beyond the predetermined value may not yield additional useful data. Similarly, if consumer gender is of interest, and no other metrics are of interest, images depicting the same consumer after the gender of that consumer has been determined may not yield additional useful data. Thus, by determining that these images do not contain additional useful data and not transmitting them from the image capture device 110, the image capture device reduces power consumption.
  • As discussed above, in at least some implementations of the system 100, the edge device 112 exerts a measure of control over the image capture device 110.
  • the edge device 112 may transmit one or more commands to the image capture device 110.
  • the commands may instruct the image capture device 110 to perform any suitable function, at least some of which may include specifying a frame rate of the image capture device 110, specifying an image capture rate of the image capture device 110, instructing the image capture device 110 to capture images when motion is detected, specifying a schedule for the image capture device 110 to capture images, specifying a resolution of the captured images, etc.
  • the edge device 112 further transmits an indication to the image capture device 110 of whether the edge device 112 has received sufficient images to determine desired characteristics or metrics. Based on this indication, the image capture device 110 may cease transmitting certain images to the edge device 112, as discussed above.
  • the image capture device 110 and/or the edge device 112 can generate a track record.
  • the track record can include information about the images and/or the image capture device 110 associated with the captured images.
  • the track record can include one or more of an identification of the image capture device 110, status information related to the image capture device 110, an indication of consumer engagement with the image capture device 110 and/or a display 108 to which the image capture device 110 is affixed, a gender of the consumer, an age of the consumer (or age category, such as youth, adult, or senior), an ethnicity of the consumer, a mood of the consumer, and/or objects carried or worn by the consumer.
  • At least some of the data included in the track record can be associated with a level of confidence in an accuracy of the data, where the level of confidence can also be included in the track record.
  • the image capture device 110 can transmit the track record to the edge device 112 and/or the cloud computing component 102.
  • the edge device 112 can generate and/or supplement the track record.
  • the image capture device 110 may transmit the track record to the edge device 112 and/or the cloud computing component 102 without transmitting the image from which the image capture device 110 derived the track record.
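A track record can be modeled as a small structured object; a minimal sketch, in which every field name is an assumption rather than a schema defined by the description:

```python
# Hypothetical shape for a track record per the fields listed above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrackRecord:
    device_id: str                          # identification of the image capture device
    status: str                             # device status information
    engaged: Optional[bool] = None          # consumer engagement with the device/display
    gender: Optional[str] = None
    age_category: Optional[str] = None      # e.g., "youth", "adult", "senior"
    ethnicity: Optional[str] = None
    mood: Optional[str] = None
    carried_objects: list[str] = field(default_factory=list)   # objects carried or worn
    confidence: dict[str, float] = field(default_factory=dict) # per-field confidence levels

record = TrackRecord(device_id="icd-110", status="ok", gender="female",
                     confidence={"gender": 0.87})
```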
  • the edge device 112 receives the images transmitted by the image capture device 110.
  • the images are received according to any suitable short-range or long-range wireless communication protocol supported by both the image capture device 110 and the edge device 112.
  • the edge device 112 transmits the received images to the cloud computing component 102 without processing the received images.
  • the edge device 112 processes the images received from the image capture device 110 prior to transmission to the cloud computing component 102.
  • the processing includes artificial intelligence processing to identify characteristics and/or metrics related to the images or consumers depicted by the images.
  • the edge device 112 may process the images to determine one or more demographic identifiers such as any one or more of a gender of a consumer, an age of the consumer, an ethnicity of the consumer, a level of engagement of the consumer with the display 108, a mood, sentiment, or feeling of the consumer, and/or information regarding accessories worn or carried by the consumer.
  • the level of engagement of the consumer can include determining whether or not the consumer is facing the image capture device 110 and/or the display 108, a proximity of the consumer to the image capture device 110 and/or the display 108, and/or a dwell time of the consumer in front of, in view of, or in a certain position in relation to the image capture device 110 and/or the display 108.
  • the edge device 112 may further process the images to determine any other suitable characteristics or metrics based on contents of the images.
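Dwell time, one of the engagement metrics above, can be estimated from timestamped detections of the same consumer across interval-captured images; a minimal sketch, where the function name and the visit-gap threshold are assumptions:

```python
# Estimate dwell time by summing spans between consecutive sightings of the
# same consumer, treating a long gap as the start of a new visit.

def dwell_time_s(detection_times: list[float], gap_limit_s: float = 120.0) -> float:
    total = 0.0
    for earlier, later in zip(detection_times, detection_times[1:]):
        if later - earlier <= gap_limit_s:   # close sightings belong to one visit
            total += later - earlier
    return total

# Sightings at t = 0 s, 60 s, 120 s, then much later (a separate visit)
print(dwell_time_s([0.0, 60.0, 120.0, 900.0]))  # 120.0
```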
  • the edge device 112 can receive the images in a time-spaced manner that does not approximate video.
  • the edge device 112 may process the images to approximate a video stream based on the images.
  • the edge device may perform statistical sampling and/or artificial intelligence processing to approximate a video stream based on the images received from the image capture device 110.
  • the edge device 112 also receives a track record from the image capture device 110, where a track record is received for, and corresponds to, each received image. In other implementations of the system 100, the edge device 112 receives a track record from the image capture device 110 without receiving an image.
  • the edge device 112 may modify the track record based on processing of the image by the edge device 112.
  • the track record received by the edge device 112 may include information about the image capture device 110, as well as results of at least some pre-processing performed by the image capture device 110.
  • the edge device 112 updates the track record associated with the image to reflect results of the processing of the image in the track record of the image.
  • the edge device 112 may receive track records from multiple image capture devices 110 within a same general location, such as in a same store, a same structure, etc., and/or receive multiple track records from a single image capture device 110.
  • the edge device 112 in at least some examples, generates a track file that collates the data from the various received track records.
  • the edge device 112 transmits images, track records, and/or a track file to the cloud computing component 102. In some examples, this transmission is performed on a scheduled basis such that the transmission comprises a batch of images and/or track records. In other examples, the transmission is performed for each image and/or track record received by the edge device 112, with the track file being transmitted separately on a scheduled basis. When the edge device 112 has performed some processing of the image, but further processing is to be performed by the cloud computing component 102, the edge device 112 may transmit both the image and its track file to the cloud computing component 102.
  • the edge device 112 may transmit the track record to the cloud computing component 102 without transmitting the image associated with the track record. Yet further alternatively, in some examples the edge device 112 may not transmit images or track records to the cloud computing component 102. Instead, the edge device 112 may transmit the track file to the cloud computing component 102 without including the underlying images or track records from which the track file was derived.
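Collating track records into a track file can be as simple as grouping records by device and batching them; a minimal sketch, assuming a JSON serialization and a `device_id` grouping key (neither is specified by the description):

```python
# Collate received track records into a single "track file" for upload.
import json
from collections import defaultdict

def build_track_file(track_records: list[dict]) -> str:
    by_device: dict[str, list[dict]] = defaultdict(list)
    for rec in track_records:
        by_device[rec["device_id"]].append(rec)   # group by originating device
    return json.dumps({"devices": by_device, "record_count": len(track_records)})

track_file = build_track_file([
    {"device_id": "icd-110", "gender": "male", "dwell_s": 42},
    {"device_id": "icd-110", "gender": "female", "dwell_s": 18},
])
```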
  • the cloud computing component 102 receives data from the edge device 112 (or a plurality of edge devices 112). In other examples, the cloud computing component 102 receives data directly from the image capture device 110 (or a plurality of image capture devices 110).
  • the received data can include images captured by the image capture device 110, track records generated by the image capture device 110, track records modified by the edge device 112, and/or track files generated by the edge device 112.
  • the cloud computing component 102 in at least some examples, generates one or more track folders that collate the data from at least some of the various received track files (e.g., such as received track files matching filtering criteria of the cloud computing component 102). Based on the track folders, the cloud computing component 102 generates and outputs reporting information. For example, the cloud computing component 102 may analyze the contents of the track folders and output reporting information to a display screen or user interface in a human readable manner.
  • the cloud computing component 102 further performs processing of the received data.
  • the processing is, in some examples, artificial intelligence processing.
  • the processing can include processing of images, processing of track records, processing of track files, or processing of track folders derived from the received data.
  • the processing can, for example, determine demographic information of a consumer depicted in an image received by the cloud computing component 102.
  • the processing can reveal trends or other insights related to data received in a plurality of track records and/or in track files.
  • the cloud computing component 102 determines and transmits one or more control signals to the edge device 112 and/or the image capture device 110.
  • control signals may control what processing is done by the image capture device 110 and/or the edge device 112, what information is included in a track record and/or a track file, a frequency of image capture of the image capture device 110, or any other suitable parameter of operation of the image capture device 110 and/or the edge device 112.
  • the cloud computing component 102 performs one or more additional functions.
  • the cloud computing component 102 may host or otherwise provide a portal (e.g., a software construct) that enables a user to estimate costs, order, and/or purchase the display 108, the image capture device 110, the edge device 112, and/or other point-of-sale or marketing related items, provide settings or other input for the image capture device 110, and/or review data captured, determined, calculated, or otherwise provided by the image capture device 110, the edge device 112, and/or the cloud computing component 102.
  • a user may interface with the portal to register an account for use in conjunction with an image capture device 110.
  • the user may register the account via a digital identity, such as a digital identity stored on, and managed by, a blockchain, Hyperledger, or other form of immutable and trusted digital construct.
  • the digital identity may enable the user to register the account for use in conjunction with an image capture device 110 while also maintaining anonymity and full control and ownership of data of the user.
  • the user may access the portal to perform various functions. For example, the user may access the portal to order the display 108 and/or the image capture device. In at least one implementation, the user may access the portal to identify a third-party printer to produce the display 108 and include, embed, or otherwise implement the image capture device 110 with the display 108. In some examples, the portal may provide mechanisms for the user to upload and provide graphics or other data to the third-party printer. In some examples, the portal may also provide mechanisms for the user to pay the third-party printer via the portal. In various other implementations, the portal may enable the user to interact with other third-parties, such as to contract with the third-parties for services, goods, or data, either to be received by, or provided by, the user.
  • the user may also access the portal to review data, such as a data profile that may relate to the observed environment 104, a particular retailer, a particular type of products, a particular demographic, etc.
  • the user may access the portal to review data provided by a vendor, data provided by a retailer, or data provided by any other suitable third- party or source to determine insights about demographics in a certain environment (e.g., such as the observed environment 104 while the user is evaluating whether to place the display 108 with the image capture device 110 in the observed environment 104).
  • the user may access the portal to review data captured by, or derived from data captured by, an image capture device 110 purchased by, and currently or previously deployed in the observed environment 104 on instruction of, the user.
  • the data may be for an image capture device 110 currently deployed in an observed environment 104, as well as data from image capture devices that were previously, but are no longer, deployed, or which have been redeployed in a different environment.
  • the portal may be implemented as a plugin to an Internet web browser software application.
  • the user may also access the portal to provide input to, or change settings of, an image capture device 110.
  • the user may control a frequency with which the image capture device 110 captures images, what demographics are tracked, monitored, or otherwise determined based on the images captured by the image capture device 110, a retention time for data captured by, or determined based on data captured by, the image capture device 110, etc.
  • While the edge device 112 is generally described herein as interacting with the image capture device 110 and/or the cloud computing component 102, the edge device 112 may have additional functions or capabilities.
  • multiple edge devices 112 may be present in an observed environment 104 and arranged so as to form or function as a mesh network.
  • Third-parties may license access to use the edge devices 112.
  • a third-party may use the mesh network as a communication backbone to provide services to consumers in the observed environment 104.
  • a user may enter the observed environment 104 with a smartphone or other electronic device and may launch an application on the electronic device.
  • the user may search for available connections via a menu of the electronic device and may select a network provided by a retailer or other party responsible for the observed environment 104.
  • the network used by customers may be provided and maintained by the retailer.
  • certain third-parties may wish to provide Internet-based services to users in the observed environment 104 while not using the network provided by the retailer.
  • the retailer may incentivize third-parties to use a network other than the network provided by the retailer, such as to reduce a resource load of the network provided by the retailer.
  • the third-party may contract with a provider of the edge device 112 to use the edge device 112 for communication in place of the network provided by the retailer.
  • the application executing on the consumer’s electronic device may determine that the edge device 112 is present and may communicate, such as with the cloud computing component 102 or another device, via the edge device 112.
  • the image capture device 110 can include a system on a chip (SoC) 202 that includes a processor 204, a memory 206, a camera 208, and an antenna 210.
  • the camera 208 is a device of any process technology suitable for capturing an image, the scope of which is not limited herein.
  • the images captured by the camera 208 can be analog images or digital images, which can include one or more image frames.
  • the image capture device 110 can directly generate digital image frames.
  • the camera 208 can be positioned to capture images of an area where the viewers of the products or the people being observed are generally expected to be.
  • the image capture device 110 may include more than one camera 208.
  • the image capture device 110 further includes a battery 212 and a wireless power receiver 214.
  • the image capture device 110 can optionally include additional sensors such as heat, thermal, or temperature sensors, such as an ultra-low-power thermal imaging sensor. Other sensors such as motion sensors (e.g., vibration sensors, accelerometers, etc.) or the like can also be present.
  • the memory 206 includes an image capture and analysis computer program product 216.
  • the image capture and analysis computer program product 216 controls operation of the SoC 202 to cause the SoC 202 to capture images via the camera 208, process the images via the processor 204, such as according to one or more artificial intelligence algorithms or processes, and transmit the images and/or a result of the processing (e.g., such as a track record) via the antenna 210.
  • the image capture and analysis computer program product 216 is programmable or otherwise modifiable, such as by the edge device 112 and/or the cloud computing component 102, to modify operation of the image capture device 110.
  • the SoC 202 further includes a status LED 218 and a power LED 220.
  • the processor 204 can serve to allow the various functions and controls on the image capture device 110 to be performed, along with controlling the communication interface used by the image capture device 110 to communicate with the edge device 112 and/or the cloud computing component 102.
  • the processor 204 may be a regular computer, a special purpose computer, or a specialized microcontroller, such as a low-power processor used to provide control and processing functions with lower power consumption on the device.
  • the wireless power receiver 214 comprises one or more solar power cells arranged into a solar power panel.
  • the solar power cells are dye-sensitized solar cells that facilitate harvesting energy from indoor light sources such as fluorescent, incandescent, and/or LED lights.
  • the processor 204 is coupled to the memory 206, the camera 208, the antenna 210, and the battery 212.
  • the processor 204 is further coupled to the wireless power receiver 214.
  • the battery 212 in at least some examples, further couples to the wireless power receiver 214.
  • an energy harvesting capacity of the wireless power receiver may be directly proportional to a surface area of the wireless power receiver 214.
  • a wireless power receiver 214 having a first surface area has a capacity for harvesting a greater amount of energy than another wireless power receiver having a second surface area smaller than the first surface area.
  • each solar power cell is approximately 12 millimeters (mm) x 100 mm in size.
  • each solar cell is capable of harvesting or generating approximately 0.3 milliwatts (mW) of power in an environment with lighting having a luminance of about 1000 lux (such as an environment lit by incandescent or fluorescent light) and approximately 0.6 mW of power in an environment having a luminance of about 2000 lux (such as an environment lit by LED light). Also in that example, each solar cell is capable of generating a direct current voltage of about 0.5 volts (V).
  • the solar power cells may be configured and coupled together in a multi-cell configuration to increase a power and voltage generation capability of the multi-cell configuration.
  • in an example with ten of the above solar power cells coupled in a multi-cell configuration, the wireless power receiver would be capable of generating or harvesting about 3 mW of power from a light source at about 1000 lux and about 6 mW of power from a light source at about 2000 lux, each with a voltage of about 5 V.
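The multi-cell figures above follow directly from the per-cell ratings; a worked check using only the values already given (0.3 mW / 0.6 mW and 0.5 V per cell, ten cells):

```python
# Ten-cell receiver built from the per-cell figures stated above.
CELLS = 10
MW_PER_CELL_1000_LUX = 0.3   # mW per cell under ~1000 lux (incandescent/fluorescent)
MW_PER_CELL_2000_LUX = 0.6   # mW per cell under ~2000 lux (LED)
V_PER_CELL = 0.5             # DC volts per cell; series coupling sums voltage

print(CELLS * MW_PER_CELL_1000_LUX)  # 3.0 mW at ~1000 lux
print(CELLS * MW_PER_CELL_2000_LUX)  # 6.0 mW at ~2000 lux
print(CELLS * V_PER_CELL)            # 5.0 V
```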
  • the wireless power receiver 214 can be somewhat remote from the image capture device 110.
  • the wireless power receiver 214 can be placed in a location accessible by light, such as on top of a shelf or display.
  • a wire or other connection can then be used to connect the wireless power receiver 214 to the image capture device 110. This allows for positioning of the wireless power receiver 214 in a position to harvest power while allowing the image capture device 110 to be placed in a position to capture information as desired.
  • the SoC 202 can be an ESP-EYE computing platform such that the processor 204 is a TENSILICA LX6 processor with WIFI and BLUETOOTH functionality and the camera 208 includes a 2-megapixel sensor (e.g., the camera 208 is a 2-megapixel camera).
  • the SoC 202 consumes about 62 mW-hour (mWhr) of power per day when capturing, processing, and transmitting one image per minute over a 12-hour period of time.
  • For the wireless power receiver 214 discussed above and capable of generating 3 mW of power, the wireless power receiver 214 generates or harvests about 72 mWhr of power per 24-hour period when exposed to light of an illuminance of 1000 lux for the same period. Thus, the power requirement of the SoC 202 is less than the power suppliable by the wireless power receiver 214 comprising 10 solar power cells. For the wireless power receiver 214 discussed above and capable of generating 6 mW of power, the wireless power receiver 214 generates or harvests about 144 mWhr of power per 24-hour period when exposed to light of an illuminance of 2000 lux for the same period.
  • This increased power harvesting facilitates operation of the SoC 202 for a longer period of time (e.g., about 24 hours) when capturing, processing, and transmitting 1 image per second, or enabling the SoC 202 to capture, process, and transmit more than 1 image per second.
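Those figures close the daily energy budget; a minimal arithmetic check using only the numbers stated above:

```python
# Daily energy budget: SoC consumption vs. 10-cell receiver harvest.
consumption_mwh = 62          # SoC draw per day (one image/minute over 12 hours)
harvest_1000_lux = 3 * 24     # 3 mW for 24 h -> 72 mWh/day
harvest_2000_lux = 6 * 24     # 6 mW for 24 h -> 144 mWh/day

print(harvest_1000_lux >= consumption_mwh)  # True: budget closes at 1000 lux
print(harvest_2000_lux >= consumption_mwh)  # True, with roughly 2x headroom at 2000 lux
```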
  • the image capture device 110 further includes cellular communication functionality, such as an LTE modem.
  • the image capture device 110 may communicate with the cloud computing component 102 without using the edge device 112 as a bridge. Accordingly, in at least some examples in which the image capture device 110 includes cellular communication functionality, the edge device 112 of the system 100 may be omitted.
  • the image capture device 110 can include and/or communicate using cellular communication functionality when the power consumed by using the cellular communication functionality, combined with the power consumed by other components of the SoC 202, does not exceed the power harvested or generated by the wireless power receiver 214 in a given time period.
  • the method 300 is, in at least some examples, a method of initialization of the image capture device 110.
  • the method 300 is executed, in some examples, when the image capture device 110 is first activated or powered-on.
  • the battery 212 can be electrically coupled to the image capture device 110.
  • the battery 212 can be electrically coupled to the image capture device 110 by a user removing a non-conductive isolator from between adjacent electrical contacts of the image capture device 110 and the battery 212.
  • the non-conductive isolator can be, for example, a pull tab or tape.
  • the battery 212 is electrically coupled to the image capture device 110 by connecting an electrical conductor (e.g., a wire or cable) between the battery 212 and the image capture device 110.
  • the image capture device 110 powers on and initiates a power-on self-test (POST).
  • the image capture device 110 determines whether a received system voltage is sufficient for normal operation of the image capture device 110. When the received voltage is sufficient for normal operation of the image capture device 110, the method 300 proceeds to operation 308. When the received voltage is not sufficient for normal operation of the image capture device 110, the method 300 returns to operation 304.
  • the image capture device 110 determines whether a charge of the battery 212 is greater than a predetermined threshold. In at least some examples, the predetermined threshold can be about five percent, or the predetermined threshold can represent a minimum operating voltage supplied by the battery to indicate a level of charge. When the battery charge is greater than the predetermined threshold, the method 300 proceeds to operation 310. When the battery charge is not greater than the predetermined threshold, the method 300 proceeds to operation 314.
  • the image capture device 110 determines whether other components of the image capture device 110 are turned on. These other components can include, for example, the camera 208 and/or a communication interface (e.g., radio) of the SoC 202. When the other components of the image capture device 110 are turned on, the method 300 proceeds to operation 312. When the other components of the image capture device 110 are not turned on, the method 300 proceeds to operation 314.
  • the image capture device 110 causes the status LED 218 to indicate that the image capture device 110 is active (e.g., the LED 218 can flash red, remain on, etc.). After activating the status LED 218 and causing the status LED 218 to indicate an operating state, the method 300 can proceed to operation 316.
  • the image capture device 110 can cause the status LED 218 to turn on with a red color for a predetermined period of time before turning off.
  • the predetermined period of time is about two seconds.
  • the image capture device 110 determines whether a wireless network is detected.
  • the wireless network may be of any suitable short-range or long-range protocol.
  • the method 300 returns to operation 314.
  • the method 300 proceeds to operation 318.
  • the image capture device 110 causes the status LED 218 to indicate that a network is detected (e.g., flash yellow, remain yellow or a different color, etc.). Alternatively, in some examples, the image capture device 110 causes a different LED to flash yellow or indicate that a network is detected. After activating the status LED 218 and causing the status LED 218 to indicate that a network is detected, the method 300 proceeds to operation 320.
  • the image capture device 110 determines whether it is communicatively connected to the network detected at the operation 316. When the image capture device 110 is not communicatively connected to the network, the method 300 returns to operation 318. When the image capture device 110 is communicatively connected to the network, the method 300 proceeds to operation 322.
  • At operation 322, the image capture device 110 causes the power LED 220 to indicate an active state of the image capture device 110 (e.g., flash green, remain green, etc.). After activating the power LED 220 and causing the power LED 220 to indicate that the image capture device 110 is working, the method 300 proceeds to operation 324.
  • the image capture device 110 captures an image via the camera 208 and transmits the image with a result or report of the POST.
  • the transmission is to the cloud computing component 102.
  • the transmission is to the edge device 112.
  • the image capture device 110 can process the image prior to transmission to determine one or more characteristics or metrics and generate a track record, as discussed in greater detail elsewhere herein. In such examples, the image capture device may include the track record in the transmission. After the image capture device 110 sends the transmission, the method 300 proceeds to operation 326.
  • the image capture device 110 sets a real-time clock (RTC) timer to one minute. After setting the RTC timer to one minute, in at least some examples, the image capture device enters a hibernation state. Entering hibernation, in at least some examples, preserves power of the image capture device 110.
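The initialization flow of method 300 can be summarized as a short state sequence; a compressed sketch, in which the `device` object and all of its helper methods are assumptions introduced purely for illustration:

```python
# Compressed sketch of method 300 (operations 304-326); names are hypothetical.
import time

BATTERY_THRESHOLD = 0.05  # ~five percent, per the description

def initialize(device) -> None:
    while not device.system_voltage_ok():                 # operations 304-306
        time.sleep(1)
    if device.battery_charge() > BATTERY_THRESHOLD and device.components_on():
        device.status_led("active")                       # operation 312
    else:
        device.status_led("red", duration_s=2)            # operation 314
    while not device.network_detected():                  # operation 316
        device.status_led("red", duration_s=2)
    device.status_led("network-detected")                 # operation 318
    while not device.network_connected():                 # operation 320
        time.sleep(1)
    device.power_led("green")                             # operation 322
    device.transmit(device.capture_image(), device.post_report())  # operation 324
    device.set_rtc_minutes(1)                             # operation 326
    device.hibernate()
```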
  • the method 400 is, in at least some examples, a method of image capture and processing.
  • the method 400 is, in some examples, performed by the image capture device 110. In other examples, the method 400 is performed partially by the image capture device 110 and partially by the edge device 112.
  • the image capture device 110 begins operation and the method 400 proceeds to either operation 404 or operation 406.
  • when the image capture device 110 is configured for capturing images based on motion detection (sometimes referred to as a motion mode), the method 400 proceeds to operation 404.
  • when the image capture device 110 is configured for timed capture of images (sometimes referred to as a sync mode), the method 400 proceeds to operation 406.
  • the image capture device 110 initializes the motion mode and the method 400 proceeds to operation 408.
  • the image capture device 110 initializes the sync mode and the method 400 proceeds to operation 410.
  • the image capture device 110 determines whether a delay period between capturing images has expired (e.g., whether an elapsed time since a last image was captured has reached a predetermined threshold value). When the delay period between capturing images has not expired, the method 400 returns to operation 402. When the delay period between capturing images has expired, the method 400 proceeds to operation 412.
  • the image capture device 110 determines whether a person is within a predetermined distance of the image capture device 110.
  • the predetermined distance is within about 5 feet, within about 10 feet, or within about 15 feet, though other distances can be used as thresholds as selectably set by the system.
  • the method 400 returns to operation 402.
  • the method 400 proceeds to operation 412.
  • the image capture device 110 captures an image and the method 400 then proceeds to operation 414.
  • the method 400 proceeds from operation 412 to operation 426, omitting intervening operations.
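
A small predicate captures the two triggers just described: the sync-mode delay check and the motion-mode proximity check. The delay and distance constants below are assumptions drawn from the ranges mentioned above, not values fixed by the disclosure:

```python
import time
from typing import Optional

CAPTURE_DELAY_S = 60.0       # assumed sync-mode delay between captures
TRIGGER_DISTANCE_FT = 10.0   # assumed motion-mode threshold (about 5, 10, or 15 feet)

def should_capture(mode: str,
                   last_capture_ts: float,
                   person_distance_ft: Optional[float]) -> bool:
    """Return True when a capture should occur in the given mode."""
    if mode == "sync":
        # Timed capture: has the delay period since the last image expired?
        return time.monotonic() - last_capture_ts >= CAPTURE_DELAY_S
    if mode == "motion":
        # Motion capture: is a person within the predetermined distance?
        return (person_distance_ft is not None
                and person_distance_ft <= TRIGGER_DISTANCE_FT)
    return False
```
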
  • the following operations 414 through 426 may be performed by the edge device 112.
  • at least some of the following operations 414 through 426 may be performed by the image capture device 110.
  • the image captured at operation 412 is processed to determine whether a face is detected in the captured image.
  • the processing is performed, in at least some examples, according to artificial intelligence and/or facial detection algorithms.
  • the method 400 returns to operation 402.
  • a face counter is incremented at operation 416 and the method 400 proceeds to operation 418.
  • the face counter, in at least some examples, is an incremental counter that tracks a number of faces detected in images captured by the image capture device 110.
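
One concrete way to realize the face detection of operation 414 and the face counter of operation 416 is a Haar-cascade detector, sketched below with OpenCV. The disclosure does not mandate a particular detector, so this is an illustrative choice:

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

face_counter = 0  # running count of faces detected across captured images

def detect_and_count(image_bgr) -> int:
    """Detect faces in a captured frame and increment the face counter."""
    global face_counter
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    face_counter += len(faces)
    return len(faces)
```
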
  • the image captured at operation 412 is processed to determine whether the face that was detected at operation 414 is viewing the image capture device 110 within a margin of tolerance.
  • the tolerance is plus or minus between about 1 and 10 degrees, for example about five degrees.
  • the processing is performed, in at least some examples, according to artificial intelligence algorithms.
  • the method 400 returns to operation 402.
  • a viewer counter is incremented at operation 420 and the method 400 proceeds to operation 422.
  • the viewer counter, in at least some examples, is an incremental counter that tracks a number of viewers captured in images viewing an area around the image capture device 110 within the margin of tolerance.
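
The viewing check of operation 418 reduces to comparing an estimated head pose against the margin of tolerance. The sketch below assumes an upstream head-pose estimator supplying yaw and pitch in degrees; both that estimator and the five-degree default are assumptions consistent with the range described above:

```python
TOLERANCE_DEG = 5.0  # within the roughly 1 to 10 degree margin described above

viewer_counter = 0   # running count of people observed viewing the device

def record_if_viewing(yaw_deg: float, pitch_deg: float) -> bool:
    """Count a detected face as a viewer when its head pose is within tolerance."""
    global viewer_counter
    is_viewing = abs(yaw_deg) <= TOLERANCE_DEG and abs(pitch_deg) <= TOLERANCE_DEG
    if is_viewing:
        viewer_counter += 1
    return is_viewing
```
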
  • key metrics can be captured and a track record can be generated at operation 424.
  • the method 400 proceeds to operation 426.
  • the track record generated at operation 424, and in some examples the image captured at operation 412 are transmitted.
  • the transmission is from the image capture device 110 to the edge device 112.
  • the transmission is from the image capture device 110 to the cloud computing component 102.
  • the transmission is from the edge device 112 to the cloud computing component 102.
  • the operation 426 is omitted, such as when the edge device 112 performs operations 416 through 424 and will continue to perform additional processing on the image captured at operation 412.
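
A track record, as generated at operation 424 and transmitted at operation 426, can be modeled as a small structured payload. The field names below are illustrative assumptions rather than a schema given by the disclosure:

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TrackRecord:
    """Illustrative per-capture track record; field names are assumed."""
    device_id: str
    captured_at: float
    face_count: int
    viewer_count: int
    gender: Optional[str] = None
    ethnicity: Optional[str] = None
    age: Optional[str] = None

record = TrackRecord("cam-001", time.time(), face_count=2, viewer_count=1)
payload = json.dumps(asdict(record))  # body of the transmission at operation 426
```
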
  • the method 500 is, in at least some examples, a method of image processing.
  • the method 500 is, in some examples, performed by the edge device 112. In other examples, the method 500 is performed partially by the edge device 112 and partially by the cloud computing component 102. In yet other examples, the method 500 is performed by cloud computing component 102.
  • an image and a track record are received.
  • the image and the track record are received by the edge device 112 from the image capture device 110.
  • the image and the track record are received by the cloud computing component 102 from the image capture device 110.
  • the image and the track record are received by the cloud computing component 102 from the edge device 112.
  • the operation 502 is omitted, such as when a device implementing the method 500 has received or otherwise acquired the image and the track record, such as if the device implementing the method 500 also captured the image or has already performed at least some processing of the image.
  • the image is processed to determine a gender of a person depicted in the image.
  • the processing is performed, in at least some examples, according to artificial intelligence algorithms. If the gender is determined from the image, the track record is updated at operation 506.
  • the method 500 then proceeds to operation 508.
  • when the track record is updated with the determined gender, the track record is further updated with a confidence level.
  • the confidence level indicates a determined or estimated confidence in an accuracy of the determined gender.
  • the image is processed to determine an ethnicity of a person depicted in the image.
  • the processing is performed, in at least some examples, according to artificial intelligence algorithms. If the ethnicity is determined from the image, the track record is updated at operation 510.
  • the method 500 proceeds to operation 512.
  • when the track record is updated with the determined ethnicity, the track record is further updated with a confidence level.
  • the confidence level indicates a determined or estimated confidence in an accuracy of the determined ethnicity.
  • the image is processed to determine an age of a person depicted in the image.
  • the processing is performed, in at least some examples, according to artificial intelligence algorithms.
  • the age is represented as a numerical value or a range of numerical values. In other examples, the age is represented as a generalized category or classification such as child, youth, adult, senior, etc.
  • the track record is updated at operation 514. The method 500 then proceeds to operation 516. In at least some examples, when the track record is updated with the determined age, the track record is further updated with a confidence level. The confidence level indicates a determined or estimated confidence in an accuracy of the determined age.
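
Operations 504 through 514 follow one pattern: run an estimator and, when it yields a value, write both the value and its confidence level into the track record. A generic sketch, with the estimator left abstract since the disclosure does not name a particular model:

```python
from typing import Callable, Optional, Tuple

def update_attribute(record: dict,
                     name: str,
                     estimate: Callable[[], Tuple[Optional[str], float]]) -> dict:
    """Estimate one demographic attribute and, if it is determined, record
    the value together with its confidence level."""
    value, confidence = estimate()
    if value is not None:
        record[name] = value
        record[f"{name}_confidence"] = confidence
    return record

# Example with a stubbed age estimator:
record = update_attribute({}, "age", lambda: ("adult", 0.83))
```
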
  • multiple track records are compared to determine, at operation 518, whether the track records depict the same person.
  • when the track records depict the same person, the method 500 proceeds to operation 520.
  • when the track records do not depict the same person, the method 500 proceeds to operation 522.
  • a dwell time is incremented.
  • the dwell time, in at least some examples, is an incremental counter that tracks an amount of time that a person is detected as viewing the image capture device 110.
  • a track file is updated and transmitted.
  • the track file collates data from multiple track records, as discussed elsewhere herein.
  • the track file may represent a store level view of data for multiple image capture devices 110 and/or an image capture device 110 over time.
  • the transmission of the track file is, in some examples, to the cloud computing component 102 and/or a data store accessible by the cloud computing component 102.
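
Collating track records into a store-level track file (operation 522) is essentially an aggregation keyed by device. The keys and units below are illustrative assumptions:

```python
from collections import defaultdict
from typing import Dict, List

def build_track_file(track_records: List[dict]) -> Dict[str, dict]:
    """Fold per-capture track records into per-device totals for a track file."""
    totals: Dict[str, dict] = defaultdict(lambda: {"viewers": 0, "dwell_s": 0.0})
    for rec in track_records:
        entry = totals[rec["device_id"]]
        entry["viewers"] += rec.get("viewer_count", 0)
        entry["dwell_s"] += rec.get("dwell_s", 0.0)
    return dict(totals)
```
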
  • the method 600 is, in at least some examples, a method of data management and control.
  • the method 600 is, in some examples, performed by the cloud computing component 102.
  • reporting parameters are specified.
  • the reporting parameters are specified, in some examples, based on or according to input received from users.
  • the reporting parameters are specified according to artificial intelligence processes or algorithms.
  • the reporting parameters are specified by a combination of input received from users and artificial intelligence processes or algorithms.
  • the reporting parameters, in some examples, specify which parameters the cloud computing component 102 will output for review by users, such as via a dashboard or other graphical user interface.
  • the reporting parameters specify which parameters the cloud computing component 102 will determine, or attempt to determine, based on analyzing and/or processing received track files.
  • the cloud computing component 102 receives and stores one or more track files.
  • the track files are received, in some examples, as inputs from one or more reporting locations, such as observed environments (e.g., stores).
  • the track files are stored by the cloud computing component 102 in a track file database.
  • the track files are stored together as a track record that collates the data of the track files, as described above herein.
  • the cloud computing component 102 analyzes the track files received at operation 604 and/or track files received as input from the track file database. Multiple forms of analysis are possible at operation 606, including at least analysis or processing according to one or more artificial intelligence algorithms. For example, the cloud computing component 102 may analyze the track files to identify trends in the track files or insights derived from the track files. The cloud computing component 102 may analyze the track files to determine trends or insights for a particular observed environment, for a particular vendor, for a particular product or display, for observed environments in a particular region, for all observed environments, etc.
  • the cloud computing component 102 updates one or more metrics for one or more observed environments based on the analysis. For example, the cloud computing component 102 may update metrics related to averages, trends, minimums, maximums, or any other suitable or user-requested metrics derived from the track files for one or more observed environments. In at least some examples, the updated metrics are stored to a track file and/or track record in the track file database.
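
The metric updates of operation 608 amount to statistical rollups over stored track files. A minimal sketch over dwell-time samples, with the metric names assumed:

```python
import statistics
from typing import List

def dwell_metrics(dwell_samples_s: List[float]) -> dict:
    """Compute average, minimum, and maximum dwell time for an observed
    environment from a non-empty list of samples."""
    return {
        "dwell_avg_s": statistics.fmean(dwell_samples_s),
        "dwell_min_s": min(dwell_samples_s),
        "dwell_max_s": max(dwell_samples_s),
    }
```
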
  • the cloud computing component 102 analyzes the status of one or more image capture devices 110.
  • the cloud computing component 102 may analyze status or other control information regarding the image capture device 110 that is included in one or more of the track files stored in the track file database.
  • the analysis is a power management analysis that analyzes power related statuses and/or image capture frequency of the image capture device(s) 110.
  • at least some information regarding the analysis is stored to a track file and/or track record in the track file database.
  • the cloud computing component 102 modifies control parameters for one or more image capture devices 110.
  • the cloud computing component 102 may modify a frequency of image capture by the image capture device(s) 110 or a mode of image capture (e.g., between motion or sync modes) by the image capture device(s) 110.
  • the cloud computing component 102 may further, or alternatively, modify processing options that are performed by the image capture device(s) 110 and/or by the edge device(s) 112, information provided by the image capture device(s) 110 and/or by the edge device(s) 112, etc.
  • the modified control parameters are sent to one or more of the observed environments from which track files were received at operation 604.
  • At least some information regarding the modified control parameters is stored to a track file and/or track record in the track file database.
  • the modification to the control parameters performed at operation 612 is based on results of the analysis of the status of the image capture device(s) performed at operation 610.
  • the modification of the control parameters may be for any suitable purpose, such as to increase a frequency of image capture to obtain more information, to reduce a frequency of image capture to reduce power consumption of the image capture device(s) 110, etc.
  • the method 600 omits operation 612, such that the method 600 proceeds from operation 610 to operation 614, when the cloud computing component 102 determines, based on the analysis of operation 610, that no changes are to be made to control parameters of the image capture devices 110.
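
One plausible realization of the power-management outcome of operations 610 and 612 is a policy that widens the capture interval as harvested power runs low. The thresholds and scaling below are assumptions, not values from the disclosure:

```python
def revise_capture_interval(battery_pct: float, interval_s: int) -> int:
    """Lengthen or shorten the capture interval based on remaining power."""
    if battery_pct < 20.0:
        return interval_s * 2              # low power: capture less often
    if battery_pct > 80.0:
        return max(30, interval_s // 2)    # ample power: capture more often
    return interval_s                      # no change
```
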
  • the cloud computing component 102 updates an information dashboard.
  • the information dashboard is, in some examples, a graphical user interface that reports information to the users from which input was received at operation 602. For example, in some implementations the information dashboard reports at least some parameters set at operation 602.
  • the information is, in some examples, store metrics and/or device status information, such as discussed above with respect to operation 608 and/or operation 610.
  • at least some information regarding the updated information dashboard is stored to a track file and/or track record in the track file database.
  • the images captured by the image capture device can be processed to obtain various information from the images such as demographic information, spacing information, temperature data, etc.
  • various types of machine learning algorithms such as facial recognition algorithms, demographic algorithms and the like can be used to obtain the information from the images.
  • the images can be processed using an information determination algorithm(s) as part of an image processing system that can process an image to determine various information such as demographic information. Different image processing algorithms may be used to process an image and, thus, are suitable to generate different types of information or emphasize different efficiency factors (e.g., speed vs. accuracy).
  • the information determination algorithm(s) can be configured to retrieve an image frame, send the digital image frame data to an image processing engine, receive the demographic-map data from the image processing engine, and send the demographic-map data to an information server that can store the obtained information.
  • the information determination algorithm(s) can comprise one or more algorithms that can be used for one or more types of information. For example, a separate determination algorithm may be used for each type of demographic information, and in some embodiments, multiple determination algorithms can be used for the same demographic information, where a confidence interval for each algorithm can be used to determine a final classification of the information from the image.
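
When several determination algorithms score the same demographic attribute, their confidences must be reconciled into a final classification. A confidence-weighted vote is one simple way to do this; the disclosure only requires that confidence be used, so the specific rule below is an assumption:

```python
from typing import Dict, List, Tuple

def combine(results: List[Tuple[str, float]]) -> Tuple[str, float]:
    """Pick the label with the highest summed confidence across algorithms,
    returned with its normalized share of the total confidence."""
    totals: Dict[str, float] = {}
    for label, confidence in results:
        totals[label] = totals.get(label, 0.0) + confidence
    best = max(totals, key=totals.get)
    return best, totals[best] / sum(totals.values())

# e.g. combine([("adult", 0.7), ("youth", 0.4), ("adult", 0.6)]) -> ("adult", ~0.76)
```
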
  • An interface available to a user of the system can be configured to allow a user to selectively determine which demographic features are determined from the images obtained by the image capture device, stored, and presented to the user.
  • the information determination algorithm(s) can be configured to collect personal attributes such as gender, age, ethnicity, height, skin color, hair color, hair length, facial hair, weight, static/in motion, accessories, stroller, glasses, beard, dwell time, temperature, and the like for each person in the images.
  • the image processing system can also be configured to collect data relating to a person's behavior such as the dwell time at the scene (e.g., how long a person has stayed at the scene) or attentiveness (e.g., whether a person is paying attention to the products and/or display).
  • those configurations can be set up in a configuration file, which will be read by the image processing system.
  • the information determination algorithm(s) can comprise one or more algorithms or techniques to process the features of the images in order to detect persons in the image frame as well as the personal attributes of each person.
  • the image processing engine can use one or more of a naive/normal classifier, binary decision trees, boosting techniques, random trees, a Haar classifier, a Viola-Jones classifier, neural networks, Bayesian classifiers, and multivariate processors.
  • the image processing engine can also use other techniques such as k-means clustering.
  • the image processing engine can adjust the weight/importance factors of some or all of the features, and the features are then evaluated according to those factors by the image processing engine to detect different attributes.
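
The adjustable weight/importance factors can be pictured as a weighted sum over extracted features, with the engine retuning the weights per attribute; this reading is an interpretation, and the feature names and values below are purely illustrative:

```python
from typing import Dict

def weighted_score(features: Dict[str, float], weights: Dict[str, float]) -> float:
    """Score one attribute hypothesis by scaling each feature by its weight."""
    return sum(weights.get(name, 1.0) * value for name, value in features.items())

# Re-weighting the same features changes which attribute the engine favors:
score = weighted_score({"edge_density": 0.4, "symmetry": 0.7},
                       {"edge_density": 2.0, "symmetry": 0.5})
```
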
  • FIG. 7 illustrates a computer system 700 suitable for implementing one or more embodiments disclosed herein.
  • the computer system 700 includes a processor 781 (which may be referred to as a central processor unit or CPU, a computing or processing node, etc.) that is in communication with memory devices including secondary storage 782, read only memory (ROM) 783, random access memory (RAM) 784, input/output (I/O) devices 785, and network connectivity devices 786.
  • the processor 781 may be implemented as one or more CPU chips.
  • a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design.
  • a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an application specific integrated circuit (ASIC), because for large production runs the hardware implementation may be less expensive than the software implementation.
  • a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software.
  • the processor 781 may execute a computer program or application.
  • the processor 781 may execute software or firmware stored in the ROM 783 or stored in the RAM 784.
  • the processor 781 may copy the application or portions of the application from the secondary storage 782 to the RAM 784 or to memory space within the processor 781 itself, and the processor 781 may then execute instructions that the application is comprised of.
  • the processor 781 may copy the application or portions of the application from memory accessed via the network connectivity devices 786 or via the I/O devices 785 to the RAM 784 or to memory space within the processor 781, and the processor 781 may then execute instructions that the application is comprised of.
  • an application may load instructions into the processor 781, for example load some of the instructions of the application into a cache of the processor 781.
  • an application that is executed may be said to configure the processor 781 to do something, e.g., to configure the processor 781 to perform the function or functions promoted by the subject application.
  • when the processor 781 is configured in this way by the application, the processor 781 becomes a specific purpose computer or a specific purpose machine.
  • the secondary storage 782 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if RAM 784 is not large enough to hold all working data. Secondary storage 782 may be used to store programs which are loaded into RAM 784 when such programs are selected for execution.
  • the ROM 783 is used to store instructions and perhaps data which are read during program execution. ROM 783 is a non-volatile memory device which typically has a small memory capacity relative to the larger memory capacity of secondary storage 782.
  • the RAM 784 is used to store volatile data and perhaps to store instructions. Access to both ROM 783 and RAM 784 is typically faster than to secondary storage 782.
  • the secondary storage 782, the RAM 784, and/or the ROM 783 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media.
  • I/O devices 785 may include printers, video monitors, liquid crystal displays (LCDs), LED displays, touch screen displays, keyboards, keypads, switches, dials, mice, track balls, voice recognizers, card readers, paper tape readers, or other well-known input devices.
  • the network connectivity devices 786 may take the form of modems, modem banks, Ethernet cards, universal serial bus (USB) interface cards, serial interfaces, token ring cards, fiber distributed data interface (FDDI) cards, wireless local area network (WLAN) cards, radio transceiver cards that promote radio communications using protocols such as code division multiple access (CDMA), global system for mobile communications (GSM), long-term evolution (LTE), worldwide interoperability for microwave access (WiMAX), near field communications (NFC), radio frequency identity (RFID), and/or other air interface protocol radio transceiver cards, and other well-known network devices. These network connectivity devices 786 may enable the processor 781 to communicate with the Internet or one or more intranets.
  • the processor 781 might receive information from the network, or might output information to the network (e.g., to an event database) in the course of performing the above-described method steps.
  • information which is often represented as a sequence of instructions to be executed using processor 781, may be received from and outputted to the network, for example, in the form of a computer data signal embodied in a carrier wave.
  • Such information may be received from and outputted to the network, for example, in the form of a computer data baseband signal or signal embodied in a carrier wave.
  • the baseband signal or signal embedded in the carrier wave may be generated according to several methods well-known to one skilled in the art.
  • the baseband signal and/or signal embedded in the carrier wave may be referred to in some contexts as a transitory signal.
  • the processor 781 executes instructions, codes, computer programs, scripts which it accesses from hard disk, floppy disk, optical disk (these various disk-based systems may all be considered secondary storage 782), flash drive, ROM 783, RAM 784, or the network connectivity devices 786. While only one processor 781 is shown, multiple processors may be present. Thus, while instructions may be discussed as executed by a processor, the instructions may be executed simultaneously, serially, or otherwise executed by one or multiple processors.
  • the computer system 700 may comprise two or more computers in communication with each other that collaborate to perform a task.
  • an application may be partitioned in such a way as to permit concurrent and/or parallel processing of the instructions of the application.
  • the data processed by the application may be partitioned in such a way as to permit concurrent and/or parallel processing of different portions of a data set by the two or more computers.
  • virtualization software may be employed by the computer system 700 to provide the functionality of a number of servers that is not directly bound to the number of computers in the computer system 700. For example, virtualization software may provide twenty virtual servers on four physical computers.
  • Cloud computing may comprise providing computing services via a network connection using dynamically scalable computing resources.
  • Cloud computing may be supported, at least in part, by virtualization software.
  • a cloud computing environment may be established by an enterprise and/or may be hired on an as-needed basis from a third party provider.
  • Some cloud computing environments may comprise cloud computing resources owned and operated by the enterprise as well as cloud computing resources hired and/or leased from a third party provider.
  • the computer program product may comprise one or more computer readable storage medium having computer usable program code embodied therein to implement the functionality disclosed above.
  • the computer program product may comprise data structures, executable instructions, and other computer usable program code.
  • the computer program product may be embodied in removable computer storage media and/or non-removable computer storage media.
  • the removable computer readable storage medium may comprise, without limitation, a paper tape, a magnetic tape, magnetic disk, an optical disk, a solid state memory chip, for example analog magnetic tape, compact disk read only memory (CD-ROM) disks, floppy disks, jump drives, digital cards, multimedia cards, and others.
  • the computer program product may be suitable for loading, by the computer system 700, at least portions of the contents of the computer program product to the secondary storage 782, to the ROM 783, to the RAM 784, and/or to other non-volatile memory and volatile memory of the computer system 700.
  • the processor 781 may process the executable instructions and/or data structures in part by directly accessing the computer program product, for example by reading from a CD-ROM disk inserted into a disk drive peripheral of the computer system 700.
  • the processor 781 may process the executable instructions and/or data structures by remotely accessing the computer program product, for example by downloading the executable instructions and/or data structures from a remote server through the network connectivity devices 786.
  • the computer program product may comprise instructions that promote the loading and/or copying of data, data structures, files, and/or executable instructions to the secondary storage 782, to the ROM 783, to the RAM 784, and/or to other non-volatile memory and volatile memory of the computer system 700.
  • the secondary storage 782, the ROM 783, and the RAM 784 may be referred to as a non-transitory computer readable medium or a computer readable storage media.
  • a dynamic RAM embodiment of the RAM 784 may be referred to as a non-transitory computer readable medium in that while the dynamic RAM receives electrical power and is operated in accordance with its design, for example during a period of time during which the computer system 700 is turned on and operational, the dynamic RAM stores information that is written to it.
  • the processor 781 may comprise an internal RAM, an internal ROM, a cache memory, and/or other internal non-transitory storage blocks, sections, or components that may be referred to in some contexts as non-transitory computer readable media or computer readable storage media.
  • any one or more of the operations recited herein include one or more sub-operations. In some examples any one or more of the operations recited herein is omitted. In some examples any one or more of the operations recited herein is performed in an order other than that presented herein (e.g., in a reverse order, substantially simultaneously, overlapping, etc.). Each of these alternatives is intended to fall within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

In one embodiment, a device comprises a dye-sensitized solar panel configured to harvest power from energy emitted as light by an indoor light source, a camera, and a processor coupled to the dye-sensitized solar panel and the camera. The processor can be configured to: receive at least some of the power harvested by the dye-sensitized solar panel, capture an image via the camera, and transmit the image to a computing device. A battery can be coupled to the dye-sensitized solar panel and the processor, and can be configured to store at least some of the power harvested by the dye-sensitized solar panel and provide at least some of the stored power to the processor.
PCT/US2021/035369 2020-06-02 2021-06-02 Image capture system and processing WO2021247649A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/061,268 US20230114454A1 (en) 2020-06-02 2022-12-02 Information capture system and processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063033394P 2020-06-02 2020-06-02
US63/033,394 2020-06-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/061,268 Continuation-In-Part US20230114454A1 (en) 2020-06-02 2022-12-02 Information capture system and processing

Publications (1)

Publication Number Publication Date
WO2021247649A1 true WO2021247649A1 (fr) 2021-12-09

Family

ID=78830521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/035369 WO2021247649A1 (fr) Image capture system and processing

Country Status (1)

Country Link
WO (1) WO2021247649A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080049848A1 (en) * 2006-07-21 2008-02-28 Turnbull Robert R Method and system for reducing signal distortion in a continuously variable slope delta modulation scheme
WO2013043590A1 * 2011-09-23 2013-03-28 Shoppertrak Rct Corporation System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device
US20140225992A1 (en) * 2011-08-12 2014-08-14 Intuitive Surgical Operations, Inc. Increased resolution and dynamic range image capture unit in a surgical instrument and method
WO2018203512A1 * 2017-05-05 2018-11-08 Arm K.K. Methods, systems and devices for detecting user interactions
US20190148083A1 (en) * 2013-05-17 2019-05-16 Exeger Operations Ab Dye-sensitized solar cell unit and a photovoltaic charger including the solar cell unit
US20200074154A1 (en) * 2010-06-07 2020-03-05 Affectiva, Inc. Image analysis using a semiconductor processor for facial evaluation in vehicles
WO2020076356A1 * 2018-10-08 2020-04-16 Google Llc Systems and methods for providing feedback for artificial intelligence-based image capture devices
US20200158293A1 (en) * 2008-07-24 2020-05-21 Lumigrow, Inc. Lighting system for growing plants

Similar Documents

Publication Publication Date Title
US10372988B2 (en) Systems and methods for automatically varying privacy settings of wearable camera systems
US9258556B2 (en) Digital signage device capable of entering diagnostic display mode
CN107533357B (zh) 一种显示装置及内容显示系统
CN112862516B (zh) 资源投放方法、装置、电子设备及存储介质
US7643658B2 (en) Display arrangement including face detection
US20120109399A1 (en) Energy resource conservation systems and methods
US20140365644A1 (en) Internet traffic analytics for non-internet traffic
CN105825522A (zh) 图像处理方法和支持该方法的电子设备
JP2015011712A (ja) デジタル情報収集および解析方法およびその装置
WO2015168306A1 (fr) Procédés, systèmes et appareils de surveillance de visiteurs
US20230114454A1 (en) Information capture system and processing
JP2019531558A (ja) 対話式コンテンツ管理
KR20190028030A (ko) 디지털 사이니지 관리 시스템 및 방법
US9858598B1 (en) Media content management and deployment system
WO2021247649A1 (fr) Système et traitement de capture d'images
US20150228034A1 (en) Photo booth system
US9842353B1 (en) Techniques of claiming all available timeslots in media content management and deployment system
US9836762B1 (en) Automatic assignment of media content items to digital signage device based on comparison between demographic information collected at digital signage device and media content metadata
US10074098B2 (en) Demographic information collection and content display based on demographic information
CN207250101U (zh) 新零售场景广告终端
WO2015161357A1 (fr) Système et procédé pour surveiller une activité de dispositif mobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21817151

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21817151

Country of ref document: EP

Kind code of ref document: A1