WO2019154112A1 - Method and apparatus for detecting entry and exit states - Google Patents

Method and apparatus for detecting entry and exit states

Info

Publication number
WO2019154112A1
WO2019154112A1 (PCT application PCT/CN2019/073120)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
state
distance
image acquisition
Application number
PCT/CN2019/073120
Other languages
English (en)
French (fr)
Inventor
韩喆
张晓博
姚四海
吴军
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority to SG11202005455YA
Priority to JP2020536646A (granted as JP6916394B2)
Priority to KR1020207018820A (granted as KR102366681B1)
Priority to EP19751705.5A (published as EP3716142A4)
Publication of WO2019154112A1
Priority to US16/889,622 (granted as US11102458B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188 Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply
    • H04N23/651 Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method and apparatus for detecting an entry and exit state.
  • the monitoring system can automatically determine whether the preset object has entered or left the venue, so that the business system can provide an appropriate service according to the entry and/or exit state of the preset object.
  • the embodiment of the present application provides a method and device for detecting an entry and exit state, and a corresponding application system, which can automatically detect the entry and exit states of a preset object.
  • an embodiment of the present application provides a method for detecting an entry and exit state, which is performed by a monitoring detection system, where the method includes:
  • the first preset condition includes at least one of the following:
  • the distance between the object and the target object is less than a first preset threshold
  • the statistical value of the distance between the object and the target object in the first preset time period is less than the second preset threshold
  • the change in the distance between the object and the target object over the first preset time interval is greater than a third preset threshold;
  • the distance between the object and the target object is greater than a fourth preset threshold
  • the statistical value of the distance between the object and the target object in the second preset time period is greater than the fifth preset threshold.
  • before determining the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, the method further includes:
  • the object recognition system is a cloud object identification system.
  • the recognition result includes at least one of the following:
  • the number of preset objects contained in the image in the image acquisition area.
  • determining the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area includes at least one of the following:
  • Determining that the state of the target object is an admission state when the number of the preset objects included in the image in the image collection area is greater than zero;
  • after determining the state of the target object, the method further includes:
  • a second instruction is sent to the image acquisition system corresponding to the distance detection range to turn off the image acquisition system or to switch the image acquisition system to a standby mode.
  • sending the second instruction to the image acquisition system corresponding to the distance detection range includes:
  • the second preset condition includes: the change in the distance between the object and the target object over the second preset time interval is less than a sixth preset threshold.
  • after determining the state of the target object, the method further includes:
  • the first preset condition is determined according to a state of the target object.
  • the embodiment of the present application provides a method for detecting an entry and exit state, which is performed by an image acquisition system, and the method includes:
  • the object is located within a distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the monitoring detection system determines the state of the target object according to a recognition result obtained by performing object recognition on an image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state.
  • after acquiring the image in the image acquisition area of the image acquisition system, the method further includes:
  • the image acquisition system is turned off or switched to the standby mode.
  • the embodiment of the present application provides an entry and exit state detecting device, which is applied to a monitoring detection system, and the device includes:
  • a distance monitoring module, which monitors the distance between an object within the distance detection range and the target object;
  • a first instruction sending module, which, when the distance between the object and the target object satisfies a first preset condition, sends a first instruction to the image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image in its image acquisition area;
  • a state determining module, which determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state.
  • the embodiment of the present application provides an entry and exit state detecting device, which is applied to an image acquisition system, and the device includes:
  • a first instruction receiving module, which receives a first instruction sent by the monitoring detection system when the distance between the object and the target object satisfies the first preset condition, where the object is located within a distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • an image acquisition module, which acquires an image in the image acquisition area of the image acquisition system, so that the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state.
  • an electronic device including:
  • a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the following operations:
  • an embodiment of the present application provides a computer readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the following operations:
  • an electronic device including:
  • a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the following operations:
  • the object is located within a distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the monitoring detection system determines the state of the target object according to a recognition result obtained by performing object recognition on an image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state.
  • an embodiment of the present application provides a computer readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the following operations:
  • the object is located within a distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the monitoring detection system determines the state of the target object according to a recognition result obtained by performing object recognition on an image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state.
  • an embodiment of the present application provides an application system, including a monitoring detection system, an image acquisition system, an object recognition system, and a business system, where:
  • the monitoring detection system monitors the distance between an object within the distance detection range and the target object; when the distance between the object and the target object satisfies the first preset condition, it sends a first instruction to the image acquisition system corresponding to the distance detection range to activate the image acquisition system to acquire an image in its image acquisition area; and it further determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state;
  • the image acquisition system receives the first instruction, where the first instruction is sent by the monitoring detection system when the distance between the object and the target object satisfies the first preset condition, the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system; the image acquisition system also acquires an image in its image acquisition area, so that the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, where the state of the target object includes an entry state and/or an exit state;
  • the object recognition system receives the image in the image acquisition area, performs object recognition on the image to obtain the recognition result, and returns the recognition result;
  • the business system receives the state of the target object and determines a business process corresponding to the state of the target object.
  • In the embodiment of the present application, an image acquisition system may be used to acquire an image in an image acquisition area, and object recognition is then performed on the collected image, so that the state of the target object is determined according to the recognition result. Therefore, it is possible to determine more accurately whether the preset object has entered and/or left.
  • Moreover, the image acquisition system is activated to acquire images in its image acquisition area only when the distance condition is satisfied; therefore, system power consumption can be effectively reduced to meet application requirements.
  • FIG. 1 is a schematic structural diagram of an application system to which an embodiment of the present application is applied;
  • FIG. 2 is a schematic flowchart of a method for detecting an entry and exit state performed by a monitoring and detecting system according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of an implementation of a scenario in which a self-service restaurant is applied to an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for detecting an entry and exit state performed by an image acquisition system according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of an entry and exit state detecting apparatus applied to a monitoring and detecting system according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an entry and exit state detecting device applied to an image acquisition system according to an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for detecting an entry and exit state of an application system according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram showing the architecture of an application system capable of automatically detecting the entry and exit states of a preset object. It can be understood that the application system can be applied to various application scenarios, for example, self-service restaurants, vending cabinets, or automatic access control.
  • the application system may include a monitoring detection system 100, an image acquisition system 200, and a business system 300.
  • the monitoring detection system 100 can monitor the distance between the object entering the distance detection range and the target object 500 to activate the image acquisition system for image acquisition when the distance meets certain conditions.
  • the image acquisition system 200 may acquire images within the image acquisition area after being activated to determine whether the preset object is included in the image acquisition area based on the recognition result of the object recognition of the images.
  • the monitoring detection system 100 and/or the image acquisition system 200 can transmit images within the image acquisition area to the recognition system 400 for object recognition.
  • If the image in the image acquisition area includes a preset object, it may be determined that the target object is in the entry state; if the image in the image acquisition area does not include the preset object, the target object may be determined to be in the exit state. On this basis, the state information of the target object can be further sent to the business system 300, so that the business system 300 determines the corresponding business process according to the state of the target object.
  • the recognition system 400 for performing object recognition on images may be a recognition system installed locally at the target object, or a cloud recognition system deployed remotely.
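  • The determination rule described above (a preset object present in the image means the entry state, absent means the exit state) can be sketched as follows. This is a minimal illustration; the function name and the state labels are assumptions, not part of the application:

```python
def determine_state(num_preset_objects: int) -> str:
    """Map the recognition result to the target object's state:
    at least one preset object in the image means the entry state,
    none means the exit state."""
    return "entry" if num_preset_objects > 0 else "exit"
```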
  • the method for detecting the entry and exit state performed by the monitoring detection system may specifically include the following steps:
  • S101 Monitor the distance between the object within the distance detection range and the target object.
  • the monitoring detection system may use a distance sensing module to detect the distance between the object and the target object appearing within the distance detection range of the distance sensing module in real time.
  • the distance sensing module may be disposed at the target object, and the distance between the object and the target object is obtained by detecting the distance between the object and the distance sensing module.
  • the distance sensing module may adopt one or more of an ultrasonic ranging sensor, a laser ranging sensor, and an infrared ranging sensor, as long as the accuracy of the distance monitoring and the specific requirements of the application scenario can be met.
  • the ultrasonic ranging sensor includes a transmitting unit for transmitting ultrasonic waves and a receiving unit for receiving ultrasonic echoes, and the ultrasonic echo ranging principle can be used to detect the distance between two objects.
  • the emitted ultrasonic wave will rebound after encountering an occluding object (which may be an item or a human body). Therefore, the ultrasonic ranging sensor can calculate the distance traveled by the ultrasonic wave from the time difference between transmitting the ultrasonic wave and receiving the ultrasonic echo, thereby obtaining the distance to the occluding object.
  • Ultrasonic distance measuring sensors have the advantages of small dead zone, accurate measurement, no contact, and low cost.
  • the ultrasonic ranging sensor may be disposed on the target object to monitor the distance between the object within the distance detection range and the target object.
  • the specific arrangement position and direction of the ultrasonic ranging sensor are adjusted so that the transmitting unit emits ultrasonic waves in a certain direction and starts timing at the time of transmission. Since the ultrasonic wave hits the obstacle when it propagates in the air, it will immediately return. Therefore, the receiving unit stops counting after receiving the reflected wave (corresponding to the ultrasonic echo).
  • Assuming the ultrasonic propagation velocity is v, and the time difference between the ultrasonic wave emitted by the transmitting unit and the echo received by the receiving unit is t, the distance to the occluding object is d = vt/2, since the wave travels the path there and back.
  • Although the propagation speed of the ultrasonic wave is related to temperature, the speed changes by less than 1% when the temperature changes by 5 degrees Celsius, so the sound velocity can be considered fixed when the temperature does not vary much. This accuracy is usually sufficient for indoor application scenarios such as self-service restaurants and vending cabinets.
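  • The echo-ranging calculation and the temperature sensitivity discussed above can be sketched as follows. This is illustrative only; the linear speed-of-sound approximation v ≈ 331.3 + 0.606·T m/s is a standard physics approximation, not taken from the application:

```python
def speed_of_sound(temp_celsius: float) -> float:
    """Approximate speed of sound in air, in m/s."""
    return 331.3 + 0.606 * temp_celsius

def echo_distance(time_diff_s: float, temp_celsius: float = 20.0) -> float:
    """Distance to the occluding object: the ultrasonic wave travels to
    the object and back, so the one-way distance is v * t / 2."""
    return speed_of_sound(temp_celsius) * time_diff_s / 2.0
```

A 5 degree Celsius temperature swing changes the computed speed by about 3 m/s out of roughly 343 m/s, i.e. under 1%, consistent with treating the sound velocity as fixed indoors.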
  • In addition to the ultrasonic ranging sensor, it is also possible to measure and monitor the distance using a laser ranging sensor and/or an infrared ranging sensor.
  • the principle of optical ranging is similar to the principle of acoustic ranging. The main difference lies in the time difference between the emitted light and the received light. Since the light wave on which the laser ranging sensor depends is greatly affected by sunlight and the like, it may be more susceptible to interference during the day. Therefore, the laser ranging sensor is more suitable for use at night, for example, in nighttime self-service access control. When the illumination is insufficient, the infrared ranging sensor can also be selected to achieve better ranging accuracy.
  • a plurality of ranging sensors can be used in combination to meet the requirements of different measurement precisions and application scenarios, which is not limited in this embodiment of the present application.
  • In the self-service restaurant scenario, the target object may be a dining table 501 in the restaurant (or a buffet cabinet in the restaurant), as shown in FIG. 3. Whether the corresponding business process is started is determined by detecting whether a human body (that is, the preset object is a human body) approaches or leaves the dining table.
  • the ultrasonic ranging sensor 502 (or another type of ranging sensor) may be disposed on the target object (i.e., the dining table 501), with the direction in which the transmitting unit emits ultrasonic waves adjusted toward the directions from which objects are most likely to approach or leave the dining table; for example, sensors can be placed around the table.
  • the ultrasonic ranging sensor 502 can be mounted on the side of the dining table and emit ultrasonic waves in the horizontal direction, so that a human body enters the distance detection range 506 of the sensor when approaching or moving away from the table. It can be understood that, in order to ensure that a human body can be monitored when approaching or leaving the table from any direction, ranging sensors can be arranged around the table.
  • In some scenarios, the signals (sound waves or light waves) emitted by multiple ranging sensors may interfere with one another. For example, ranging sensors placed on tables on both sides of an aisle may have overlapping distance detection ranges, so that the ultrasonic waves emitted by one ranging sensor may be received by another, affecting the accuracy of the distance measurement.
  • To avoid such interference, a plurality of methods may be adopted.
  • For example, a plurality of ranging sensors may be controlled to transmit signals in turn; or, when the distance determination is performed, detection values whose distance exceeds a certain threshold may be automatically discarded.
  • As another example, the ranging sensor that transmits a signal at the current time can be determined according to the reservation status of the user (here, specifically a diner). For example, when the user enters the self-service restaurant and makes a reservation by scanning a code, the table number reserved for the meal is determined (which can serve as the ID of the target object), and only the sensor on the table corresponding to that table number is activated to transmit.
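  • The interference-mitigation measures above (letting sensors transmit in turn and discarding implausible readings) can be sketched as follows. The helper name, reading interface, and the `max_valid_m` cutoff are hypothetical:

```python
def poll_sensors_in_turn(sensor_ids, read_distance, max_valid_m=5.0):
    """Let each ranging sensor transmit in turn, and discard any
    reading beyond max_valid_m metres as likely cross-sensor
    interference or an out-of-range echo."""
    valid = {}
    for sensor_id in sensor_ids:      # sensors transmit one at a time
        d = read_distance(sensor_id)
        if d <= max_valid_m:          # drop implausible detections
            valid[sensor_id] = d
    return valid
```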
  • The distance between the object and the target object can be monitored by performing step S101. To check the condition for activating the image acquisition system (that is, the first preset condition), step S107 is performed to determine whether the distance between the object and the target object satisfies the first preset condition. If the first preset condition is met, step S103 is further performed; if it is not met, the flow returns to step S101 to continue monitoring.
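  • The loop formed by steps S101, S107, and S103 can be sketched as follows. This is a schematic only, and all callback names are hypothetical:

```python
def detect_entry_exit(read_distance, first_preset_condition,
                      acquire_image, recognize_and_determine,
                      max_polls=1000):
    """S101: monitor the distance; S107: test the first preset
    condition; S103: activate image acquisition, then determine the
    target object's state from the recognition result."""
    for _ in range(max_polls):
        distance = read_distance()            # S101
        if first_preset_condition(distance):  # S107
            image = acquire_image()           # S103 (activation)
            return recognize_and_determine(image)
    return None  # condition never met within the polling budget
```

Because the camera is only activated inside the conditional branch, the sketch also reflects the power-saving behaviour the application describes.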
  • the content of the first preset condition may be different in different application scenarios.
  • the first preset condition may include at least one of the following:
  • This preset condition can be understood as follows: a distance smaller than the first preset threshold indicates that the object is close enough to the target object, and the monitored object may need to use the service corresponding to the target object.
  • Taking the self-service restaurant as an example, when an object (which may be a human body, such as a diner; or an item, such as a cart for clearing table leftovers) approaches the target object (here embodied as a dining table), the object may be a diner who needs to dine at this table.
  • At this time, the monitoring detection system can activate the image acquisition system for image acquisition, and then use the recognition system to perform object recognition on the collected image, thereby determining whether the object close to the table is a preset object (here embodied as a human body): if the object close to the table is a human body, the dining table may be about to be used by a diner, the target object can be understood to enter the admission state, and the business system can then proceed to a business process such as ordering; if the object close to the table is not a human body, no diner intends to use the table, the target object can be understood to remain in the exit state, and there is no need to enter the business system.
  • In practice, the monitored values of the distance between the object and the target object may contain glitch signals, which can affect the judgment result. Therefore, a statistical value of the distance between the object and the target object over a certain period (for example, the first preset time period) can be calculated; this statistic reflects the overall measurement of the distance within the time window (that is, the first preset time period), thereby eliminating the effect of glitch signals on the judgment result.
  • the statistic may be the average value or the median of the distance measurements within the first preset time period.
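  • The glitch-suppression idea can be sketched as follows; the sample values in the usage below are made up for illustration:

```python
from statistics import mean, median

def window_statistic(samples, use_median=True):
    """Collapse a window of distance samples into one statistic so a
    single glitch reading cannot flip the activation decision."""
    return median(samples) if use_median else mean(samples)
```

With samples [1.0, 1.1, 9.9, 1.0, 1.2] metres, the median (1.1 m) ignores the 9.9 m glitch, whereas the mean (2.84 m) is pulled well above the true distance.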
  • That the statistical value of the distance in the first preset time period is less than the second preset threshold can be understood as follows: within a certain time window, the distance between the object and the target object is close enough, and the monitored object may need to use the service corresponding to the target object. Taking the application scenario of a self-service container as an example, when an object (which may be a human body, such as a container manager; or an item, such as a cargo container) approaches the target object (here embodied as a container), it may be necessary to load the container.
  • At this time, the monitoring detection system can activate the image acquisition system for image acquisition, and then use the recognition system to perform object recognition on the acquired image, to determine whether the object approaching the container is a preset object (here embodied as a cargo container): if it is, the container needs to be loaded, the target object can be understood to be in the entry state, and the business system can then proceed to business processes such as warehousing and loading or unloading; if the object approaching the container is not a cargo container, the container does not need to be loaded, the target object can be understood to remain in the exit state, and there is no need to enter the business system.
  • This preset condition can be understood as follows: if the distance between the monitored object (that is, an object located within the distance detection range) and the target object remains stable over a certain time interval (for example, the first preset time interval), i.e., the change in distance within that interval is small enough (for example, not greater than the third preset threshold), the object is likely not moving, or its movement amplitude has not reached a preset level.
  • When a diner approaches the table and sits down to dine, the distance between the diner and the table is usually small and basically stable. The table remains in the admission state until the diner finishes eating and leaves. Therefore, as long as the distance between the diner and the dining table does not change sufficiently, the monitoring detection system does not need to activate the image acquisition system for image acquisition to re-judge the entry and exit state of the table.
  • In practice, this condition can be combined with other conditions to avoid frequently activating the image acquisition system when the entry and exit state of the target object has not changed, which helps further reduce system power consumption.
  • the distance between the object and the target object is greater than a fourth predetermined threshold.
  • This preset condition can be understood as follows: a distance greater than the fourth preset threshold indicates that the object is far enough from the target object, and the monitored object may no longer need to use the service corresponding to the target object.
  • Taking the self-service restaurant as an example again, when an object (which may be a human body, such as a diner; or an item, such as a cart for clearing table leftovers) moves away from the target object (here embodied as a dining table), the monitoring detection system can activate the image acquisition system for image acquisition, and then use the recognition system to perform object recognition on the collected image, thereby determining whether the object far from the table is a preset object (here embodied as a human body): if the object moving away from the table is a human body, the diner no longer needs the table, the target object can be understood to enter the exit state, and payment deduction can be performed according to the business process corresponding to the exit state; conversely, if the object moving away from the table is not a human body, the diner has not left the table, and there is no need to enter the business system to adjust the business process.
  • the statistical value of the distance between the object and the target object in the second preset time period is greater than the fifth preset threshold.
  • That the statistical value of the distance in the second preset time period is greater than the fifth preset threshold can be understood as follows: within a certain time window, the distance between the object and the target object is far enough, and the monitored object may no longer need to use the service corresponding to the target object. Therefore, the image acquisition system can be activated to perform image acquisition, and object recognition is then performed on the collected image to determine whether the preset object is included in the image acquisition area: if the image acquisition area still contains the preset object, the preset object can be understood to be relatively close to the target object, and the target object is still in the admission state; if the preset object is not included in the image acquisition area, the preset object can be understood to have moved far from the target object, and the target object can be considered to be in the exit state.
  • In summary, the first preset condition used by the monitoring detection system to decide whether to activate the image acquisition system may be a combination of the foregoing conditions: whether the distance changes greatly (the distance difference is greater than a certain threshold), the distance is far (the distance value, or its mean or median within the time window, is greater than a certain threshold), or the distance is close (the distance value, or its mean or median within the time window, is less than a certain threshold), the image acquisition system may need to be activated for image acquisition.
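  • The combined check can be sketched as follows. The threshold values are illustrative assumptions; the application leaves their concrete values unspecified:

```python
from statistics import median

def should_activate(current_m, window_m,
                    t1=0.5, t2=0.6, t3=1.0, t4=3.0, t5=2.5):
    """Return True if any branch of the first preset condition holds.
    t1..t5 stand in for the first through fifth preset thresholds."""
    stat = median(window_m)                  # statistic over the time window
    change = max(window_m) - min(window_m)   # distance change in the window
    return (current_m < t1      # close enough right now
            or stat < t2        # statistically close in the window
            or change > t3      # distance changed sharply
            or current_m > t4   # far enough right now
            or stat > t5)       # statistically far in the window
```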
  • an image acquisition system can be implemented by using an image capture device such as a camera, an HD camera, or an infrared camera.
  • the specific types, specifications, and models may be determined according to actual application scenarios, which are not limited in this embodiment.
  • the arrangement of the image acquisition device in the image acquisition system is related to the arrangement of the ranging sensor, and the image acquisition range of the image acquisition system corresponds to the distance detection range of the ranging sensor.
  • the image acquisition area of the image acquisition system and the distance detection range of the ranging sensor should overlap as much as possible. The effect achieved is that when the distance between an object and the target object, monitored within the distance detection range, satisfies the condition,
  • the image acquisition system corresponding to that distance detection range is activated, so that it can capture images in its image acquisition area.
  • the images collected in the image acquisition area therefore usually contain the monitored object (unless the monitored object has already left the image acquisition area by the time the image is captured).
  • the distance detection range 506 of the ultrasonic ranging sensor 502 overlaps substantially with the image acquisition area 505 of the image acquisition system 503 (which may be a camera).
  • the position and angle of the camera should be such that a diner is still within the image acquisition area after sitting down.
  • the correspondence between the image acquisition system and the distance detection range of the ranging sensor may be one-to-one, one-to-many, or many-to-one.
  • the camera used in the image acquisition system can either have a fixed angle or be adjustable under the control of the monitoring system. For example, when the ranging sensor monitors an object whose distance meets the preset requirement within the distance detection range, the camera is activated and its angle is adjusted under control until the overlap between the image acquisition area and the distance detection range of the ranging sensor meets the requirement.
  • when it is determined that the image acquisition system needs to be activated, the camera may be started directly and controlled to acquire an image; alternatively, the camera may be started when one condition is met and remain in standby mode after starting, and when another condition is satisfied, the camera is controlled to switch to working mode and acquire the image in the image acquisition area.
  • the first instruction sent by the monitoring detection system to the image acquisition system is used to activate the image acquisition system.
  • the image acquisition system may acquire an image in its image acquisition area directly, or only when certain conditions are met.
  • the image acquisition system may directly send the collected image to the identification system, or return the image to the monitoring detection system, and the monitoring detection system sends the image in the image collection area to the recognition system for object recognition.
  • the monitoring detection system may send the image in the image acquisition area to the object recognition system, and the object recognition system performs object recognition on the image to obtain a recognition result; the monitoring detection system then receives the recognition result returned by the object recognition system, and step S105 is performed.
  • the identification system for object recognition may be disposed locally on the target object or may be set as a remote cloud recognition system.
  • with a remote cloud recognition system, multiple target objects can share a common cloud recognition system for object recognition, which helps reduce the deployment cost of the entire application system.
  • the recognition algorithm used by the recognition system can be a general object detection algorithm such as YOLO (You Only Look Once), Fast R-CNN, or SSD.
  • the recognition model for different target objects can be trained by using different training images. The construction and training of the model can be performed by a common method.
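Whatever detector is used (YOLO, Fast R-CNN, SSD, or another model), the recognition step the text describes reduces to counting detections of the preset class in the captured frame. The interface below is a hypothetical sketch: `detections` stands in for a real detector's output, and the label and confidence threshold are illustrative, not from this disclosure.

```python
def count_preset_objects(detections, preset_label="person", min_score=0.5):
    """Count detections of the preset object class in one image.

    `detections` is assumed to be a detector's output flattened to a list
    of (label, confidence) pairs, a generic stand-in for the boxes that a
    YOLO / Fast R-CNN / SSD model would return for the captured frame.
    """
    return sum(1 for label, score in detections
               if label == preset_label and score >= min_score)

# Two confident "person" detections plus one low-confidence false positive.
dets = [("person", 0.92), ("person", 0.81), ("person", 0.31), ("chair", 0.88)]
print(count_preset_objects(dets))  # 2
```

The count (or the boolean "count > 0") is exactly the recognition result the monitoring detection system consumes in step S105.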
  • S105 Determine a state of the target object according to the recognition result obtained by performing object recognition on the image in the image capturing area, and the state of the target object includes an entry state and/or an exit state.
  • the recognition result may include at least one of the following:
  • the number of preset objects contained in the image in the image acquisition area.
  • depending on the content of the recognition result, determining the state may include at least one of the following:
  • the image in the image collection area includes a preset object, determining that the state of the target object is an entry state;
  • the monitoring detection system does not distinguish the specific type of the object when performing the distance monitoring, but only determines whether to activate the image acquisition system according to the distance between the object and the target object.
  • the state of the target object may be further determined as an entry state or an exit state. On this basis, the state of the target object can be sent to the business system, for the business system to determine the business process corresponding to that state.
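The four determination rules listed above collapse to a single mapping from the recognition result to a state. The sketch below is illustrative only (the state names and function are hypothetical, not identifiers from this disclosure); it accepts either form of recognition result named earlier, a boolean conclusion or a count.

```python
ENTRY, EXIT = "entry", "exit"

def target_state(recognition_result):
    """Map a recognition result to the target object's state.

    The result may be a boolean conclusion ("is a preset object
    present?") or a count of preset objects; both cases described
    above reduce to the same rule.
    """
    if isinstance(recognition_result, bool):
        return ENTRY if recognition_result else EXIT
    return ENTRY if recognition_result > 0 else EXIT

print(target_state(True))  # entry
print(target_state(0))     # exit
```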
  • the second instruction is sent by the monitoring detection system to the image acquisition system corresponding to the distance detection range, so as to turn off the image acquisition system or switch it to standby mode.
  • the second instruction may be sent when the distance between the monitored object and the target object tends to stabilize. Specifically, when the distance between the object and the target object satisfies the second preset condition, the second instruction may be sent to the image acquisition system corresponding to the distance detection range; the second preset condition includes: the difference between the distances between the object and the target object at a second preset time interval is less than the sixth preset threshold.
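The stabilization test for sending the second instruction can be sketched as follows; the sample interval and the settling threshold here are illustrative values, not thresholds from this disclosure.

```python
def should_send_second_instruction(samples, *, interval=5, settle=0.05):
    """Decide whether to power the camera down: the distance is treated
    as stable when it changed by less than `settle` metres across the
    last `interval` samples (both parameters are illustrative).
    """
    if len(samples) <= interval:
        return False
    return abs(samples[-1] - samples[-1 - interval]) < settle

# A diner has settled at the table: readings hover around 1.2 m.
readings = [1.20, 1.21, 1.20, 1.19, 1.20, 1.21, 1.20]
print(should_send_second_instruction(readings))  # True
```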
  • the state of the target object may also be recorded, and the first preset condition when determining whether to activate the image acquisition system may be further determined according to the current state of the target object.
  • the monitoring detection system only needs to check whether an object that may change the state of the target object appears; thus, when determining the first preset condition for activating the image acquisition system, it suffices to select only the conditions that may change the state of the target object.
  • the first preset condition may include at least one of the following:
  • the difference between the distance between the object and the target object at the first preset time interval is greater than a third preset threshold
  • the distance between the object and the target object is greater than a fourth preset threshold
  • the statistical value of the distance between the object and the target object in the second preset time period is greater than the fifth preset threshold.
  • the first preset condition may include at least one of the following:
  • the distance between the object and the target object is less than a first preset threshold
  • the statistical value of the distance between the object and the target object in the first preset time period is less than the second preset threshold
  • the difference between the distances between the object and the target object at the first preset time interval is greater than the third preset threshold.
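The state-dependent selection of conditions described above can be sketched as picking one of two condition sets based on the recorded state. The condition names below are shorthand for the threshold items listed above, not identifiers from this disclosure.

```python
def conditions_for(current_state):
    """Select which first-preset-condition checks to monitor, given the
    recorded state of the target object (names are illustrative)."""
    if current_state == "entry":
        # Only a departure can change the state now, so watch the
        # "distance grew" conditions.
        return {"interval_diff_gt_3rd", "distance_gt_4th", "window_stat_gt_5th"}
    # Exit (or unknown) state: only an arrival can change the state,
    # so watch the "distance shrank" conditions.
    return {"distance_lt_1st", "window_stat_lt_2nd", "interval_diff_gt_3rd"}

print(sorted(conditions_for("entry")))
```

Note the sharp-change condition appears in both sets: a large jump in distance is informative regardless of the current state.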
  • an image acquisition system may be used to acquire an image in an image acquisition area, and object recognition is then performed on the collected image, so that the state of the target object is determined according to the recognition result. Therefore, it is possible to determine more accurately whether the preset object has entered and/or left.
  • the image acquisition system is activated to acquire the image in the image acquisition area only when the distance condition is satisfied. Therefore, system power consumption can be effectively reduced to meet application requirements.
  • the embodiment of the present application further provides an entry and exit state detection method, which is performed by an image acquisition system, and the method may include:
  • S201 Receive a first instruction, where the first instruction is sent by the monitoring detection system when the distance between the object and the target object satisfies the first preset condition; the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • S203 Acquire an image in the image acquisition area of the image acquisition system, so that the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, the state of the target object including an entry state and/or an exit state.
  • the image acquisition system may further perform the following steps:
  • the image acquisition system is turned off or switched to the standby mode.
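The acquisition-side lifecycle just described (S201/S203 plus the optional second instruction) can be sketched as a small state machine. This is an illustrative sketch: the instruction names, mode strings, and capture stub are hypothetical, not part of this disclosure.

```python
class ImageAcquisitionSystem:
    """Minimal sketch of the acquisition-side flow: wake on the first
    instruction, capture, and return to standby on the second."""

    def __init__(self):
        self.mode = "standby"
        self.captured = []

    def handle(self, instruction):
        if instruction == "first":       # sent when the first condition holds
            self.mode = "working"
            self.captured.append(self.capture())
        elif instruction == "second":    # sent once the state is determined
            self.mode = "standby"

    def capture(self):
        return "image-of-acquisition-area"  # stand-in for a real frame grab

cam = ImageAcquisitionSystem()
cam.handle("first")
cam.handle("second")
print(cam.mode, len(cam.captured))  # standby 1
```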
  • an image acquisition system may be used to acquire an image in an image acquisition area, and object recognition is then performed on the collected image, so that the state of the target object is determined according to the recognition result. Therefore, it is possible to determine more accurately whether the preset object has entered and/or left.
  • the image acquisition system is activated to acquire the image in the image acquisition area only when the distance condition is satisfied. Therefore, system power consumption can be effectively reduced to meet application requirements.
  • the target object is taken as a dining table
  • the business system can be embodied as a multimedia interactive system.
  • the interactive system can be mainly composed of a motion collector, a data processor and a display screen.
  • the hardware carrier of the interactive system can be disposed around the dining table to facilitate operation and viewing by the diners, or an ordinary dining table can be used directly as the display-screen carrier by deploying a touch screen, a gesture recognition device, and the like on it.
  • the action collector collects the actions of the users (i.e., the diners), and the table desktop is used as a screen that displays the data processing results fed back by the interactive system, thereby realizing an intelligent table; the interaction between the diners and the business system is completed through this intelligent table.
  • the business system can enter the ordering process.
  • the menu can be displayed on the touch screen embedded in the desktop, and the diners select the corresponding dishes by tapping the touch screen, completing a series of operations such as self-ordering and adding dishes; they can even view the real-time progress of the dishes and the cooking process through the screen.
  • the intelligent table can also record the diners' identification information and the frequently-used dishes, and then provide personalized recommendation information for the diners.
  • the business system can enter the deduction process. Specifically, the touch screen can be turned off, and automatic deduction can be performed for the diners' bill amount according to the identity information (such as account information, identity ID, and the like) provided by the diners.
  • the business system can also enter the reminder process, for example, reminding service personnel to clean the table and so on.
  • the embodiment of the present application further provides an entry and exit state detecting device, which is applied to the monitoring and detecting system 100.
  • the device includes:
  • the distance monitoring module 101 monitors the distance between the object within the distance detection range and the target object
  • the first instruction sending module 103 sends a first instruction to the image acquisition system corresponding to the distance detection range when the distance between the object and the target object satisfies the first preset condition, so as to activate the image acquisition system to acquire the image in its image acquisition area.
  • the state determining module 105 determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image capturing area, and the state of the target object includes an entry state and/or an exit state.
  • the entry and exit state detecting device in this embodiment corresponds to the entry and exit state detection method performed by the monitoring detection system in the foregoing embodiment; the related content in the foregoing embodiment is applicable to this embodiment and is not repeated here.
  • the embodiment of the present application further provides an entry and exit state detecting device, which is applied to the image capturing system 200.
  • the device includes:
  • the first instruction receiving module 201 receives the first instruction sent by the monitoring detection system when the distance between the object and the target object satisfies the first preset condition; the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the image acquiring module 203 acquires an image in the image acquisition area of the image acquisition system, so that the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, the state of the target object including an entry state and/or an exit state.
  • the entry and exit state detecting device in this embodiment corresponds to the entry and exit state detection method performed by the image acquisition system in the foregoing embodiment; the related content in the foregoing embodiment is applicable to this embodiment and is not repeated here.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor, optionally including an internal bus, a network interface, and a memory.
  • the memory may include an internal memory, such as a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory.
  • the electronic device may also include hardware required for other services.
  • the processor, the network interface, and the memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one double-headed arrow is shown in Figure 7, but it does not mean that there is only one bus or one type of bus.
  • the program can include program code, the program code including computer operating instructions.
  • the memory can include both memory and non-volatile memory and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then operates to form an entry and exit state detecting device on the logical level.
  • the processor executes the program stored in the memory and is specifically used to perform the following operations:
  • the state of the target object is determined based on the recognition result obtained by performing object recognition on the image in the image acquisition area, and the state of the target object includes an entry state and/or an exit state.
  • the method performed by the entry and exit state detecting apparatus disclosed in the embodiment shown in FIG. 2 of the present application may be applied to a processor or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor, or any conventional processor, or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory, and the processor reads the information in the memory and combines the hardware to complete the steps of the above method.
  • the electronic device can also perform the method performed by the entry and exit state detecting device in FIG. 1 and realize the functions of the entry and exit state detecting device in the embodiment shown in FIG. 1, which are not repeated here.
  • the embodiment of the present application further provides a computer readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by an electronic device including a plurality of application programs,
  • cause the electronic device to perform the method performed by the entry and exit state detecting device in the embodiment shown in FIG.
  • the state of the target object is determined based on the recognition result obtained by performing object recognition on the image in the image acquisition area, and the state of the target object includes an entry state and/or an exit state.
  • an image acquisition system may be used to acquire an image in an image acquisition area, and object recognition is then performed on the collected image, so that the state of the target object is determined according to the recognition result. Therefore, it is possible to determine more accurately whether the preset object has entered and/or left.
  • the image acquisition system is activated to acquire the image in the image acquisition area only when the distance condition is satisfied. Therefore, system power consumption can be effectively reduced to meet application requirements.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes a processor, optionally including an internal bus, a network interface, and a memory.
  • the memory may include an internal memory, such as a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory.
  • the electronic device may also include hardware required for other services.
  • the processor, the network interface, and the memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one double-headed arrow is shown in Figure 8, but it does not mean that there is only one bus or one type of bus.
  • the program can include program code, the program code including computer operating instructions.
  • the memory can include both memory and non-volatile memory and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to form an entry and exit state monitoring device at a logical level.
  • the processor executes the program stored in the memory and is specifically configured to perform the following operations:
  • the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, and the state of the target object includes an entry state and/or an exit state.
  • the method performed by the entry and exit state monitoring device disclosed in the embodiment shown in FIG. 4 of the present application may be applied to a processor or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor, or any conventional processor, or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory, and the processor reads the information in the memory and combines the hardware to complete the steps of the above method.
  • the electronic device can also perform the method performed by the entry and exit state monitoring device in FIG. 4, and realize the function of the entry and exit state monitoring device in the embodiment shown in FIG. 4, which is not described herein again.
  • the embodiment of the present application further provides a computer readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by an electronic device including a plurality of application programs,
  • cause the electronic device to perform the method performed by the entry and exit state monitoring device in the embodiment shown in FIG. 4, and specifically to perform:
  • the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system;
  • the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on the image in the image acquisition area, and the state of the target object includes an entry state and/or an exit state.
  • an image acquisition system may be used to acquire an image in an image acquisition area, and object recognition is then performed on the collected image, so that the state of the target object is determined according to the recognition result. Therefore, it is possible to determine more accurately whether the preset object has entered and/or left.
  • the image acquisition system is activated to acquire the image in the image acquisition area only when the distance condition is satisfied. Therefore, system power consumption can be effectively reduced to meet application requirements.
  • the embodiment of the present application further provides an application system, including a monitoring detection system, an image acquisition system, an object recognition system, and a service system, where:
  • the monitoring detection system monitors the distance between an object within the distance detection range and the target object; when the distance between the object and the target object satisfies the first preset condition, it sends the first instruction to the image acquisition system corresponding to the distance detection range;
  • the image acquisition system receives the first instruction, which is sent by the monitoring detection system when the distance between the object and the target object satisfies the first preset condition; the object is located within the distance detection range of the monitoring detection system, and the distance detection range corresponds to the image acquisition system; the image acquisition system also acquires the image in its image acquisition area, so that the monitoring detection system determines the state of the target object according to the recognition result obtained by performing object recognition on that image, the state of the target object including an entry state and/or an exit state;
  • the object recognition system receives an image in the image acquisition area, and performs object recognition on the image in the image acquisition area to obtain a recognition result; and returns a recognition result;
  • the business system receives the state of the target object and determines the business process corresponding to the state of the target object.
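The cooperation of the four subsystems just described can be sketched end to end as follows. This is a hedged sketch only: the function, the 0.5 m threshold, and the process names are illustrative; the pre-supplied `detections` list stands in for an actually captured and recognized frame.

```python
def run_pipeline(distance_samples, detections):
    """Monitoring -> acquisition -> recognition -> business, in one pass."""
    # Monitoring detection system: an illustrative first preset condition
    # (the latest distance is close enough to warrant a look).
    if not (distance_samples and distance_samples[-1] < 0.5):
        return None                    # camera stays off; nothing to do
    # Image acquisition system: the first instruction triggers a capture;
    # here `detections` stands in for the captured frame's contents.
    # Object recognition system: does the frame contain the preset object?
    present = any(label == "person" for label, _ in detections)
    state = "entry" if present else "exit"
    # Business system: pick the process matching the state.
    return "ordering" if state == "entry" else "deduction"

print(run_pipeline([2.0, 1.1, 0.4], [("person", 0.9)]))  # ordering
```

In the dining-table example, an approaching diner yields the entry state and the ordering process; a triggered capture with no person in view yields the exit state and the deduction process.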
  • these computer program instructions can also be stored in a computer readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device,
  • and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device
  • provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device.
  • computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Alarm Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

An entry and exit state detection method, executed by a monitoring detection system, the method comprising: monitoring the distance between an object within a distance detection range and a target object (S101); when the distance between the object and the target object satisfies a first preset condition (S107), sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image in its image acquisition area (S103); and determining the state of the target object according to a recognition result obtained by performing object recognition on the image in the image acquisition area, the state of the target object including an entry state and/or an exit state (S105). The method can determine relatively accurately whether a preset object has entered and/or left, and can effectively reduce system power consumption to meet application requirements.

Description

Entry and exit state detection method and apparatus
Cross-reference to related applications
This patent application claims priority to Chinese Patent Application No. 201810127142.2, filed on February 8, 2018 and entitled "Entry and exit state detection method and apparatus", the entire content of which is incorporated herein by reference.
Technical field
The present application relates to the field of computer technology, and in particular, to an entry and exit state detection method and apparatus.
Background
With the development of computer technology, the level of intelligence in various application scenarios is growing ever higher.
In some application scenarios, such as self-service restaurants, vending cabinets, and automatic access control, a monitoring system needs to be able to automatically determine whether a preset object has entered or left, so that a business system can provide appropriate services according to the entry and/or exit state of the preset object.
Therefore, a method capable of automatically detecting the entry and exit state of a preset object is urgently needed.
Summary
Embodiments of the present application provide an entry and exit state detection method and apparatus, as well as a corresponding application system, capable of automatically detecting the entry and exit state of a preset object.
The embodiments of the present application adopt the following technical solutions:
In a first aspect, an embodiment of the present application provides an entry and exit state detection method, executed by a monitoring detection system, the method comprising:
monitoring the distance between an object within a distance detection range and a target object;
when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image in its image acquisition area; and
determining the state of the target object according to a recognition result obtained by performing object recognition on the image in the image acquisition area, the state of the target object including an entry state and/or an exit state.
可选的,本申请实施例提供的第一方面入离场状态检测方法中,所述第一预设条件包括以下至少一项:
所述对象与目标对象之间的距离小于第一预设阈值;
所述对象与目标对象之间的距离在第一预设时间段内的统计值小于第二预设阈值;
所述对象与目标对象之间的距离在第一预设时间间隔的差值大于第三预设阈值;
所述对象与目标对象之间的距离大于第四预设阈值;
所述对象与目标对象之间的距离在第二预设时间段内的统计值大于第五预设阈值。
Optionally, in the method of the first aspect, before determining the state of the target object according to the recognition result obtained by performing object recognition on the image within the image acquisition region, the method further comprises:
sending the image within the image acquisition region to an object recognition system, for the object recognition system to perform object recognition on the image and obtain the recognition result; and
receiving the recognition result returned by the object recognition system.
Optionally, in the method of the first aspect, the object recognition system is a cloud-based object recognition system.
Optionally, in the method of the first aspect, the recognition result comprises at least one of the following:
a conclusion as to whether the image within the image acquisition region contains a preset object;
the number of preset objects contained in the image within the image acquisition region.
Optionally, in the method of the first aspect, determining the state of the target object according to the recognition result comprises at least one of the following:
when the image within the image acquisition region contains the preset object, determining that the state of the target object is the active state;
when the image within the image acquisition region does not contain the preset object, determining that the state of the target object is the inactive state;
when the number of preset objects contained in the image within the image acquisition region is greater than zero, determining that the state of the target object is the active state;
when the number of preset objects contained in the image within the image acquisition region is zero, determining that the state of the target object is the inactive state.
Optionally, in the method of the first aspect, after determining the state of the target object, the method further comprises:
sending the state of the target object to a service system, for the service system to determine a service flow corresponding to the state of the target object.
Optionally, in the method of the first aspect, after determining the state of the target object, the method further comprises:
sending a second instruction to the image acquisition system corresponding to the distance detection range, so as to shut down the image acquisition system or switch it to a standby mode.
Optionally, in the method of the first aspect, sending the second instruction to the image acquisition system corresponding to the distance detection range comprises:
when the distance between the object and the target object satisfies a second preset condition, sending the second instruction to the image acquisition system corresponding to the distance detection range;
wherein the second preset condition comprises: the change in the distance between the object and the target object over a second preset time interval is less than a sixth preset threshold.
Optionally, in the method of the first aspect, after determining the state of the target object, the method further comprises:
recording the state of the target object; and
determining the first preset condition according to the state of the target object.
In a second aspect, an embodiment of this application provides an active/inactive state detection method, performed by an image acquisition system, the method comprising:
receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; and
acquiring an image within an image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
Optionally, in the method of the second aspect, after acquiring the image within the image acquisition region of the image acquisition system, the method further comprises:
receiving a second instruction, the second instruction being sent by the monitoring and detection system after determining the state of the target object; and
according to the second instruction, shutting down the image acquisition system or switching it to a standby mode.
In a third aspect, an embodiment of this application provides an active/inactive state detection apparatus, applied to a monitoring and detection system, the apparatus comprising:
a distance monitoring module, which monitors the distance between an object within a distance detection range and a target object;
a first instruction sending module, which, when the distance between the object and the target object satisfies a first preset condition, sends a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
a state determination module, which determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In a fourth aspect, an embodiment of this application provides an active/inactive state detection apparatus, applied to an image acquisition system, the apparatus comprising:
a first instruction receiving module, which receives a first instruction sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; and
an image acquisition module, which acquires an image within an image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In a fifth aspect, an embodiment of this application provides an electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the following operations:
monitoring the distance between an object within a distance detection range and a target object;
when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In a sixth aspect, an embodiment of this application provides a computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
monitoring the distance between an object within a distance detection range and a target object;
when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In a seventh aspect, an embodiment of this application provides an electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the following operations:
receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
acquiring an image within an image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In an eighth aspect, an embodiment of this application provides a computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
acquiring an image within an image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In a ninth aspect, an embodiment of this application provides an application system, comprising a monitoring and detection system, an image acquisition system, an object recognition system, and a service system, wherein:
the monitoring and detection system monitors the distance between an object within a distance detection range and a target object; when the distance between the object and the target object satisfies a first preset condition, it further sends a first instruction to the image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and it further determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state;
the image acquisition system receives the first instruction, the first instruction being sent by the monitoring and detection system when the distance between the object and the target object satisfies the first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; it further acquires the image within its image acquisition region, so that the monitoring and detection system determines the state of the target object according to the recognition result, the state of the target object comprising an active state and/or an inactive state;
the object recognition system receives the image within the image acquisition region, performs object recognition on it to obtain the recognition result, and returns the recognition result; and
the service system receives the state of the target object and determines a service flow corresponding to that state.
At least one of the above technical solutions adopted in the embodiments of this application can achieve the following beneficial effects:
In the embodiments of this application, the image acquisition system can acquire images within its image acquisition region, object recognition can then be performed on the acquired images, and the state of the target object can be determined from the recognition result. It is therefore possible to determine relatively accurately whether a preset object has arrived and/or departed. At the same time, by monitoring the distance between an object within the distance detection range and the target object, and activating the image acquisition system to acquire images only when the monitored distance satisfies the first preset condition, system power consumption can be effectively reduced to meet application requirements.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings described here are provided for a further understanding of this application and constitute a part of it; the illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an undue limitation on it. In the drawings:
FIG. 1 is a schematic architectural diagram of an application system to which embodiments of this application are applicable;
FIG. 2 is a schematic flowchart of an active/inactive state detection method performed by a monitoring and detection system according to an embodiment of this application;
FIG. 3 is a schematic diagram of an embodiment of this application applied to the self-service restaurant scenario;
FIG. 4 is a schematic flowchart of an active/inactive state detection method performed by an image acquisition system according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of an active/inactive state detection apparatus applied to a monitoring and detection system according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an active/inactive state detection apparatus applied to an image acquisition system according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 9 is a schematic flowchart of an active/inactive state detection method of an application system according to an embodiment of this application.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of this application clearer, the technical solutions of this application will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. Evidently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The technical solutions provided by the embodiments of this application are described in detail below with reference to the drawings.
FIG. 1 shows the architecture of an application system capable of automatically detecting the active/inactive state of a preset object. It can be understood that this application system can be used in a variety of scenarios, for example, self-service restaurants, vending cabinets, automatic access control, and so on.
Specifically, the application system may include a monitoring and detection system 100, an image acquisition system 200, and a service system 300. The monitoring and detection system 100 can monitor the distance between an object entering the distance detection range and a target object 500, so as to activate the image acquisition system for image acquisition when the distance satisfies a certain condition. Once activated, the image acquisition system 200 can acquire images within its image acquisition region, so that whether the image acquisition region contains a preset object can be determined from the result of performing object recognition on these images. The monitoring and detection system 100 and/or the image acquisition system 200 can send the images within the image acquisition region to a recognition system 400 for object recognition. If the images contain a preset object, the target object can be judged to be in the active state; if they do not, the target object can be judged to be in the inactive state. On this basis, the state information of the target object can further be sent to the service system 300, so that the service system 300 determines the corresponding service flow according to the state of the target object.
It should be noted that the recognition system 400 used for performing object recognition on the images may be either a recognition system deployed locally at the target object or a remote cloud-based recognition system.
As shown in FIG. 2, the active/inactive state detection method performed by the monitoring and detection system may specifically include the following steps:
S101: Monitor the distance between an object within the distance detection range and the target object.
In step S101, the monitoring and detection system may use a distance sensing module to detect, in real time, the distance between the target object and an object appearing within the distance detection range of the module. Specifically, the distance sensing module may be deployed at the target object, so that the distance between the object and the target object is obtained by detecting the distance between the object and the distance sensing module.
Optionally, the distance sensing module may use one or more of an ultrasonic ranging sensor, a laser ranging sensor, an infrared ranging sensor, and the like, as long as the required distance-monitoring accuracy and the specific requirements of the application scenario are satisfied.
An ultrasonic ranging sensor, which includes a transmitting unit for emitting ultrasonic waves and a receiving unit for receiving ultrasonic echoes, can detect the distance between two objects using the principle of ultrasonic echo ranging. An emitted ultrasonic wave bounces back upon encountering an occluding object (which may be an item or a human body); the sensor can therefore use the time difference between emitting the wave and receiving its echo to calculate the distance traveled, and thus obtain the distance from the occluding object to the sensor. Ultrasonic ranging sensors have the advantages of a small blind zone, accurate measurement, contactless operation, and low cost.
In a specific implementation, the ultrasonic ranging sensor can be deployed on the target object so as to monitor the distance between the target object and an object within the distance detection range. The deployment position and orientation of the sensor are adjusted so that the transmitting unit emits ultrasonic waves in a given direction, with timing started at the moment of emission. Since an ultrasonic wave traveling through air returns immediately upon hitting an obstacle, the receiving unit stops timing upon receiving the reflected wave (i.e., the ultrasonic echo). Assuming the propagation speed of the ultrasonic wave is v and the time difference between emission and reception is t, the distance between the emission point (i.e., the position of the target object) and the obstacle (i.e., the monitored object) can be expressed as S = v * t / 2. Although the propagation speed of ultrasound depends on temperature, a temperature change of 5 degrees Celsius changes the speed by less than 1%, so the speed of sound can be treated as constant when temperature variation is small. This accuracy is generally sufficient for indoor application scenarios such as self-service restaurants and vending cabinets.
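The time-of-flight computation above (S = v * t / 2) can be sketched in a few lines. The 343 m/s figure is the textbook speed of sound in air at roughly 20 °C, not a value from the application:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C


def distance_from_echo(time_of_flight_s: float,
                       speed_m_s: float = SPEED_OF_SOUND_M_S) -> float:
    """Return the sensor-to-object distance in meters from an echo's
    time of flight. The wave travels out and back, so the one-way
    distance is half of speed times time (S = v * t / 2)."""
    if time_of_flight_s < 0:
        raise ValueError("time of flight must be non-negative")
    return speed_m_s * time_of_flight_s / 2.0
```

For instance, an echo received 5 ms after emission corresponds to an object roughly 0.86 m away, comfortably within a tabletop sensor's working range.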
Besides ultrasonic ranging sensors, laser ranging sensors and/or infrared ranging sensors can also be used for distance measurement and monitoring. Optical ranging works on a principle similar to acoustic ranging, the main difference being that it relies on the time difference between emitted and received light. Because the light waves a laser ranging sensor relies on are strongly affected by sunlight and other ambient light, it is more susceptible to interference during the day; laser ranging sensors are therefore better suited to nighttime use, for example in nighttime self-service access control. When lighting is insufficient, an infrared ranging sensor may be chosen to achieve good ranging accuracy.
It can be understood that multiple types of ranging sensors can be used in combination to meet different measurement accuracies and scenario requirements; the embodiments of this application impose no limitation on this.
Taking the application of the above monitoring and detection system to a self-service restaurant as an example, the target object may be a dining table 501 in the restaurant (or a self-service buffet cabinet), as shown in FIG. 3. By detecting whether a human body (that is, the preset object is a human body) approaches or moves away from the table, it is determined whether to start the corresponding service flow.
Before step S101 is performed, an ultrasonic ranging sensor 502 (another type of ranging sensor may also be used) can be deployed on the target object (i.e., the dining table 501), with the emission direction of the transmitting unit adjusted toward the directions from which a person is most likely to approach or leave the table; for example, sensors can be deployed around the table. Optionally, the ultrasonic ranging sensor 502 may be mounted on the side of the table and emit ultrasonic waves horizontally, so that a person approaching or leaving the table enters the distance detection range 506 of the sensor. It can be understood that, to ensure that a person approaching or leaving from any direction is detected, ranging sensors can be deployed on all sides of the table.
In some cases, the signals (acoustic or optical) emitted by multiple ranging sensors may interfere with one another. For example, the detection ranges of sensors deployed on tables on both sides of an aisle are likely to overlap, and an ultrasonic wave emitted by one sensor may be received by another, degrading measurement accuracy. Such interference can be avoided in several ways: the sensors can be made to emit in turn; detection values exceeding a certain threshold can be automatically discarded during distance judgment; or the sensor that emits at the current moment can be determined from users' (here, diners') reservations. For example, when a diner scans a code upon entering the self-service restaurant to reserve a table, the table number (which can be embodied as the ID of the target object) is determined, and only the sensor of the table corresponding to that number needs to be activated to emit.
S103: When the distance between the object and the target object satisfies the first preset condition, send a first instruction to the image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region.
Step S101 yields the monitored distance between the object and the target object. Before step S103 is performed, the condition for activating the image acquisition system, namely the first preset condition, can be determined in advance; step S107 then judges whether the distance between the object and the target object satisfies the first preset condition. If it does, step S103 is performed; if it does not, the flow returns to step S101 to continue monitoring.
The meaning of the first preset condition may differ across application scenarios. For example, the first preset condition may include at least one of the following:
(1) The distance between the object and the target object is less than a first preset threshold.
This condition can be understood as follows: a distance below the first preset threshold indicates that the object is close enough to the target object that the monitored object may need the service corresponding to the target object. For example, in the self-service restaurant scenario, when an object (possibly a human body such as a diner, or possibly an item such as a cart for clearing table scraps) approaches the target object (here, a dining table), a diner may need to dine at this table. In this case, the monitoring and detection system can activate the image acquisition system for image acquisition and, through object recognition performed on the acquired images by the recognition system, judge whether the object approaching the table is the preset object (here, a human body). If the approaching object is a human body, a diner may be about to use the table, which can be understood as the target object entering the active state, and the service system can proceed to flows such as ordering; conversely, if the approaching object is not a human body, no diner intends to use the table, which can be understood as the target object remaining in the inactive state, with no need to enter the service system.
(2) The statistical value of the distance between the object and the target object over a first preset time period is less than a second preset threshold.
Because of environmental signal interference or errors of the ranging sensor itself, the monitored distance values may contain glitches that affect the judgment. A statistical value of the distance over a certain time period (e.g., the first preset time period) can therefore be computed to reflect the overall distance measurement within that time window (i.e., the first preset time period), eliminating the influence of glitches on the judgment. Optionally, the statistical value may be taken as the mean or the median of the distance measurements within the first preset time period.
A statistical value below the second preset threshold within the window can be understood to mean that the object has stayed close enough to the target object that it may need the service corresponding to the target object. Taking the self-service cabinet scenario as an example, when an object (possibly a person such as a cabinet administrator, or an item such as a container) approaches the target object (here, a cabinet), a container may need to be loaded into the cabinet. In this case, the monitoring and detection system can activate the image acquisition system for image acquisition and, through object recognition performed on the acquired images by the recognition system, judge whether the approaching object is the preset object (here, a container). If it is, a container needs to be loaded, which can be understood as the target object being in the active state, and the service system can proceed to flows such as warehousing and container loading/unloading; otherwise, no container needs to be loaded, which can be understood as the target object being in the inactive state, with no need to enter the service system.
(3) The change in the distance between the object and the target object over a first preset time interval is greater than a third preset threshold.
This condition can be understood as follows: if the distance between the monitored object (i.e., an object within the distance detection range) and the target object remains stable, with a change over a certain time interval (e.g., the first preset time interval) that is small enough (e.g., no greater than the third preset threshold), the object has most likely not moved, or its movement has not reached the preset extent. In this case, the active/inactive state of the target object can be considered unchanged. For example, in the self-service restaurant scenario, while a diner walks up to a table and then sits at it to eat, the distance between the diner and the table is usually small and essentially stable. It can be understood that the table remains in the active state until the diner finishes eating and leaves. As long as the change in the diner-to-table distance is small enough, the monitoring and detection system therefore does not need to activate the image acquisition system to re-judge the table's state.
Thus, even if the monitored object is close enough to or far enough from the target object, as long as the change in the distance is small enough, the image acquisition system may not need to be activated, object recognition is not needed, and the service flow of the service system need not change.
It can be understood that this condition can often be combined with other conditions, thereby avoiding frequent activation of the image acquisition system when the state of the target object will not change, which helps further reduce system power consumption.
(4) The distance between the object and the target object is greater than a fourth preset threshold.
This condition can be understood as follows: a distance above the fourth preset threshold indicates that the object is far enough from the target object that the monitored object may no longer need the service corresponding to the target object. For example, in the self-service restaurant scenario, when an object (possibly a human body such as a diner, or an item such as a cart for clearing table scraps) moves away from the target object (here, a dining table), a diner may have finished eating and left the table. In this case, the monitoring and detection system can activate the image acquisition system for image acquisition and, through object recognition performed on the acquired images by the recognition system, judge whether the departing object is the preset object (here, a human body). If the departing object is a human body, the diner no longer needs the table, which can be understood as the target object entering the inactive state, and payment can be deducted according to the service flow corresponding to the inactive state; conversely, if the departing object is not a human body, the diner has not left the table, and there is no need to enter the service system to adjust the service flow.
(5) The statistical value of the distance between the object and the target object over a second preset time period is greater than a fifth preset threshold.
Examining the statistical value of the distance within a preset time window (i.e., the second preset time period) avoids the influence of signal glitches, caused by the ranging sensor itself, on the judgment.
A statistical value above the fifth preset threshold within the window can be understood to mean that the object has moved far enough from the target object that it may no longer need the corresponding service. The image acquisition system can therefore be activated for image acquisition, and object recognition on the acquired images determines whether the image acquisition region contains the preset object. If it still does, the preset object can be understood to be relatively close to the target object, and the target object can be considered to remain in the active state; if it no longer does, the preset object can be understood to have moved away, and the target object can be considered to be in the inactive state.
Several cases of the first preset condition have been illustrated above. It should be noted that the first preset condition used by the monitoring and detection system to judge whether to activate the image acquisition system may be a combination of the above conditions. Whether the distance has changed substantially (the distance difference exceeds a threshold), is relatively far (the distance value, or its mean or median within a time window, exceeds a threshold), or is relatively near (the distance value, or its mean or median within a time window, is below a threshold), activation of the image acquisition system for image acquisition may be required.
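The five example conditions above can be combined into a single activation check. The following is a minimal sketch; all thresholds, the window size, and the use of a median as the windowed statistic are illustrative assumptions, not values from the application:

```python
from collections import deque
from statistics import median


class ActivationTrigger:
    """Sketch of the first-preset-condition check described above.

    All thresholds (in meters) and the window size are illustrative
    assumptions chosen for a tabletop-scale sensor.
    """

    def __init__(self, near_m=0.5, far_m=2.0,
                 window_near_m=0.7, window_far_m=1.6,
                 jump_m=0.5, window=5):
        self.near_m = near_m                # condition (1): instantaneously close
        self.far_m = far_m                  # condition (4): instantaneously far
        self.window_near_m = window_near_m  # condition (2): windowed statistic close
        self.window_far_m = window_far_m    # condition (5): windowed statistic far
        self.jump_m = jump_m                # condition (3): large change per interval
        self.readings = deque(maxlen=window)

    def update(self, distance_m: float) -> bool:
        """Record one reading; return True if the camera should be activated."""
        previous = self.readings[-1] if self.readings else None
        self.readings.append(distance_m)
        # Conditions (1) and (4): single-reading thresholds.
        if distance_m < self.near_m or distance_m > self.far_m:
            return True
        # Condition (3): a large change over one sampling interval.
        if previous is not None and abs(distance_m - previous) > self.jump_m:
            return True
        # Conditions (2) and (5): the median over a full window suppresses
        # single-sample glitches from the sensor.
        if len(self.readings) == self.readings.maxlen:
            m = median(self.readings)
            return m < self.window_near_m or m > self.window_far_m
        return False
```

A monitoring loop would call `update` once per ranging sample and send the first instruction whenever it returns `True`.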
Optionally, the image acquisition system can be implemented with image acquisition devices such as cameras, high-definition cameras, or infrared cameras. The specific type, specification, and model can be chosen according to the actual application scenario; the embodiments of this application impose no limitation on this.
Optionally, the deployment of the image acquisition devices in the image acquisition system is related to the deployment of the ranging sensors: the image acquisition system corresponds to the distance detection range of a ranging sensor. Specifically, the image acquisition range of the image acquisition system should overlap substantially with the distance detection range of the ranging sensor, with the following effect: when an object whose distance to the target object satisfies the first preset condition is monitored within a distance detection range, the image acquisition system corresponding to that range is activated, enabling it to acquire images within its image acquisition region. Evidently, the acquired images will usually contain the monitored object (unless the object has already left the image acquisition region by the time the images are acquired).
Taking the self-service restaurant scenario of FIG. 3 as an example, the distance detection range 506 of the ultrasonic ranging sensor 502 overlaps substantially with the image acquisition region 505 of the image acquisition system 503 (which may be a camera). One preferred arrangement in this scenario is to position and angle the camera so that the top of a diner's head remains within the image acquisition region after the diner sits down.
Optionally, the correspondence between image acquisition systems and distance detection ranges may be one-to-one, one-to-many, or many-to-one. The camera used in the image acquisition system may have a fixed angle, or its angle may be adjustable under the control of the monitoring and detection system. For example, when a ranging sensor detects, within its detection range, an object whose distance satisfies the preset requirement, the camera is activated and its angle is adjusted until the overlap between the image acquisition region and that sensor's distance detection range meets the requirement.
Optionally, when it is judged from the distance between the object and the target object that the image acquisition system needs to be activated, the camera can be started directly and controlled to acquire images; alternatively, the camera can first be started into a standby mode when one condition is satisfied, and then switched to a working mode to acquire images within the image acquisition region when a further condition is satisfied.
The first instruction sent by the monitoring and detection system to the image acquisition system is used to activate the latter. After receiving the first instruction, the image acquisition system can acquire images within its image acquisition region either directly or once certain conditions are satisfied. Optionally, the image acquisition system can send the acquired images directly to the recognition system, or return them to the monitoring and detection system, which then sends the images within the image acquisition region to the recognition system for object recognition. Specifically, the monitoring and detection system can send the images within the image acquisition region to the object recognition system, for the object recognition system to perform object recognition on them and obtain the recognition result; the monitoring and detection system then receives the recognition result returned by the object recognition system and proceeds to step S105.
Optionally, the recognition system used for object recognition can be deployed either locally at the target object or as a remote cloud-based recognition system. With a remote cloud-based recognition system, multiple target objects can share a common cloud recognition system for object recognition, which helps reduce the deployment cost of the entire application system.
It can be understood that the recognition system can use any common object detection algorithm, such as YOLO (You Only Look Once), Fast R-CNN, or SSD. Depending on the application scenario, recognition models for different target objects can be trained with different training images; model construction and training can follow common methods, which are not repeated here.
S105: Determine the state of the target object according to the recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
Optionally, the recognition result may include at least one of the following:
a conclusion as to whether the image within the image acquisition region contains a preset object;
the number of preset objects contained in the image within the image acquisition region.
When step S105 is specifically performed, determining the state of the target object according to the recognition result may include at least one of the following, depending on the content of the recognition result:
when the image within the image acquisition region contains the preset object, determining that the state of the target object is the active state;
when the image within the image acquisition region does not contain the preset object, determining that the state of the target object is the inactive state;
when the number of preset objects contained in the image within the image acquisition region is greater than zero, determining that the state of the target object is the active state;
when the number of preset objects contained in the image within the image acquisition region is zero, determining that the state of the target object is the inactive state.
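The decision rules above reduce to a small mapping from the recognition result to a state. In the sketch below, the result-dict keys (`preset_object_count`, `contains_preset_object`) are illustrative assumptions about the recognition system's output format, not a format defined by the application:

```python
def determine_state(recognition: dict) -> str:
    """Map a recognition result to the target object's state per the
    rules of step S105: any preset object present means "active",
    none present means "inactive"."""
    if "preset_object_count" in recognition:
        count = recognition["preset_object_count"]
        return "active" if count > 0 else "inactive"
    # Fall back to the boolean conclusion when no count is reported.
    return "active" if recognition.get("contains_preset_object") else "inactive"
```

Either field alone is sufficient; when both are available, the count is the stronger signal and is checked first.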
It can be understood that, when monitoring distances, the monitoring and detection system does not distinguish the specific type of the object; it judges whether to activate the image acquisition system solely from the distance between the object and the target object. After the image acquisition system has acquired images within its image acquisition region, whether the target object is in the active or the inactive state can be further judged from whether the images contain the preset object and/or how many preset objects they contain. On this basis, the state of the target object can be sent to the service system, for the service system to determine the service flow corresponding to that state.
To further reduce the power consumption brought by the image acquisition system, after the images have been acquired and the state of the target object determined, the monitoring and detection system can send a second instruction to the image acquisition system corresponding to the distance detection range, so as to shut it down or switch it to a standby mode.
In addition, the second instruction can also be sent when the monitored distance between the object and the target object stabilizes. Specifically, the second instruction can be sent to the image acquisition system corresponding to the distance detection range when the distance between the object and the target object satisfies a second preset condition, the second preset condition comprising: the change in the distance between the object and the target object over a second preset time interval is less than a sixth preset threshold.
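The second preset condition is a one-line stability test; the threshold value in this sketch is an illustrative assumption:

```python
def should_send_second_instruction(prev_m: float, curr_m: float,
                                   sixth_threshold_m: float = 0.05) -> bool:
    """Sketch of the second preset condition: when the distance change
    over the second preset time interval falls below the sixth preset
    threshold (illustrative value), the camera can be shut down or
    switched to standby."""
    return abs(curr_m - prev_m) < sixth_threshold_m
```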
Optionally, after the state of the target object is determined, that state can also be recorded, and the first preset condition used to judge whether to activate the image acquisition system can further be determined according to the current state of the target object. This can be understood as follows: the monitoring and detection system only needs to watch for objects that might change the state of the target object, so the first preset condition need only include the conditions that could change that state.
For example, when the recorded current state of the target object is the active state, only situations that might change the state to inactive need to be watched for, and the first preset condition can be taken as at least one of the following:
the change in the distance between the object and the target object over the first preset time interval is greater than the third preset threshold;
the distance between the object and the target object is greater than the fourth preset threshold;
the statistical value of the distance between the object and the target object over the second preset time period is greater than the fifth preset threshold.
As another example, when the recorded current state of the target object is the inactive state, only situations that might change the state to active need to be watched for, and the first preset condition can be taken as at least one of the following:
the distance between the object and the target object is less than the first preset threshold;
the statistical value of the distance between the object and the target object over the first preset time period is less than the second preset threshold;
the change in the distance between the object and the target object over the first preset time interval is greater than the third preset threshold.
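Selecting which clauses to monitor from the recorded state can be sketched as a simple lookup. The clause labels below are shorthand for the five example conditions, not names from the application:

```python
def first_condition_clauses(current_state: str) -> set:
    """Pick the first-preset-condition clauses worth monitoring, given
    the recorded state: only transitions away from the current state
    can change anything."""
    if current_state == "active":
        # Watch only for departure: a large change or a far distance.
        return {"large_change", "instant_far", "windowed_far"}
    # In the inactive state, watch for an approach instead.
    return {"instant_near", "windowed_near", "large_change"}
```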
In the embodiments of this application, the image acquisition system can acquire images within its image acquisition region, object recognition can then be performed on the acquired images, and the state of the target object can be determined from the recognition result. It is therefore possible to determine relatively accurately whether a preset object has arrived and/or departed. At the same time, by monitoring the distance between an object within the distance detection range and the target object, and activating the image acquisition system to acquire images only when the monitored distance satisfies the first preset condition, system power consumption can be effectively reduced to meet application requirements.
As shown in FIG. 4, an embodiment of this application further provides an active/inactive state detection method, performed by an image acquisition system, which may include:
S201: Receive a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system.
S203: Acquire an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
Optionally, after acquiring the image within its image acquisition region, the image acquisition system may further perform the following steps:
receiving a second instruction, the second instruction being sent by the monitoring and detection system after determining the state of the target object; and
according to the second instruction, shutting down or switching to a standby mode.
It can be understood that the steps performed by the image acquisition system correspond to those performed by the aforementioned application system or monitoring and detection system; the content related to the image acquisition system in the foregoing embodiments applies to this embodiment and is not repeated here.
In the embodiments of this application, the image acquisition system can acquire images within its image acquisition region, object recognition can then be performed on the acquired images, and the state of the target object can be determined from the recognition result. It is therefore possible to determine relatively accurately whether a preset object has arrived and/or departed. At the same time, by monitoring the distance between an object within the distance detection range and the target object, and activating the image acquisition system to acquire images only when the monitored distance satisfies the first preset condition, system power consumption can be effectively reduced to meet application requirements.
Taking the application of the active/inactive state detection method provided by the embodiments of this application to a self-service restaurant as an example, with the target object taken as a dining table, the service system can be embodied as a multimedia interaction system. The interaction system may consist mainly of three parts: an action collector, a data processor, and a display screen. Optionally, the hardware of the interaction system can be deployed around the table where it is convenient for diners to operate and view; alternatively, an ordinary dining table can serve directly as the display carrier by deploying on it action collectors capable of detecting user (i.e., diner) operations, such as a touch screen or a gesture recognition device, and using the tabletop as the screen that displays the data processing results of the interaction system. The table is thus made intelligent, and the interaction between diners and the service system is completed through the intelligent table.
When the target object (here, the intelligent dining table) is in the active state, the service system can enter the ordering flow. Specifically, the menu can be shown on a touch screen embedded in the tabletop; diners select dishes by tapping the screen, completing self-service ordering, dish additions, and similar operations, and can even view the real-time progress of dishes, the cooking process, and so on. The intelligent table can also record diners' identification information and frequently ordered dishes, so that personalized recommendations can subsequently be provided.
When the target object (here, the intelligent dining table) is in the inactive state, the service system can enter the payment flow. Specifically, the touch screen can be turned off, and payment can be deducted automatically according to the diner's bill amount using identity information previously provided by the diner (e.g., account information, identity ID). In addition, the service system can enter a reminder flow, for example, reminding the staff to clear the table.
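The service system's dispatch on the reported state, as in the restaurant example above, amounts to a small lookup; the flow names here are illustrative labels, not names from the application:

```python
def service_flow_for(state: str) -> str:
    """Pick the service flow matching the target object's state:
    active -> ordering; inactive -> payment and cleanup reminders."""
    flows = {
        "active": "ordering",               # show menu, take orders
        "inactive": "payment_and_cleanup",  # deduct bill, remind staff
    }
    return flows.get(state, "no_op")
```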
An embodiment of this application further provides an active/inactive state detection apparatus, applied to the monitoring and detection system 100. As shown in FIG. 5, the apparatus includes:
a distance monitoring module 101, which monitors the distance between an object within a distance detection range and a target object;
a first instruction sending module 103, which, when the distance between the object and the target object satisfies a first preset condition, sends a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
a state determination module 105, which determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
The active/inactive state detection apparatus in this embodiment corresponds to the active/inactive state detection method performed by the monitoring and detection system in the foregoing embodiments; the relevant content of the foregoing embodiments applies to this embodiment and is not repeated here.
An embodiment of this application further provides an active/inactive state detection apparatus, applied to the image acquisition system 200. As shown in FIG. 6, the apparatus includes:
a first instruction receiving module 201, which receives a first instruction sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; and
an image acquisition module 203, which acquires an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
The active/inactive state detection apparatus in this embodiment corresponds to the active/inactive state detection method performed by the image acquisition system in the foregoing embodiments; the relevant content of the foregoing embodiments applies to this embodiment and is not repeated here.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of this application. Referring to FIG. 7, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a high-speed random-access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory can be interconnected by the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in FIG. 7, but this does not mean there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code comprising computer operation instructions. The memory may include an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, forming the active/inactive state detection apparatus at the logical level. The processor executes the program stored in the memory and is specifically used to perform the following operations:
monitoring the distance between an object within a distance detection range and a target object;
when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
The method performed by the active/inactive state detection apparatus disclosed in the embodiment shown in FIG. 2 of this application can be applied to, or implemented by, the processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method can be completed by integrated logic circuits in hardware or by instructions in software form in the processor. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of this application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also perform the method performed by the active/inactive state detection apparatus of FIG. 2 and implement the functions of that apparatus in the embodiment shown in FIG. 2, which are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by an electronic device comprising a plurality of application programs, enable the electronic device to perform the method performed by the active/inactive state detection apparatus in the embodiment shown in FIG. 2, and specifically to perform:
monitoring the distance between an object within a distance detection range and a target object;
when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In the embodiments of this application, the image acquisition system can acquire images within its image acquisition region, object recognition can then be performed on the acquired images, and the state of the target object can be determined from the recognition result. It is therefore possible to determine relatively accurately whether a preset object has arrived and/or departed. At the same time, by monitoring the distance between an object within the distance detection range and the target object, and activating the image acquisition system to acquire images only when the monitored distance satisfies the first preset condition, system power consumption can be effectively reduced to meet application requirements.
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of this application. Referring to FIG. 8, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a high-speed random-access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory can be interconnected by the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in FIG. 8, but this does not mean there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code comprising computer operation instructions. The memory may include an internal memory and a non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, forming the active/inactive state detection apparatus at the logical level. The processor executes the program stored in the memory and is specifically used to perform the following operations:
receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
acquiring an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
The method performed by the active/inactive state detection apparatus disclosed in the embodiment shown in FIG. 4 of this application can be applied to, or implemented by, the processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method can be completed by integrated logic circuits in hardware or by instructions in software form in the processor. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of this application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also perform the method performed by the active/inactive state detection apparatus of FIG. 4 and implement the functions of that apparatus in the embodiment shown in FIG. 4, which are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by an electronic device comprising a plurality of application programs, enable the electronic device to perform the method performed by the active/inactive state detection apparatus in the embodiment shown in FIG. 4, and specifically to perform:
receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
acquiring an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
In the embodiments of this application, the image acquisition system can acquire images within its image acquisition region, object recognition can then be performed on the acquired images, and the state of the target object can be determined from the recognition result. It is therefore possible to determine relatively accurately whether a preset object has arrived and/or departed. At the same time, by monitoring the distance between an object within the distance detection range and the target object, and activating the image acquisition system to acquire images only when the monitored distance satisfies the first preset condition, system power consumption can be effectively reduced to meet application requirements.
An embodiment of this application further provides an application system, including a monitoring and detection system, an image acquisition system, an object recognition system, and a service system, wherein:
the monitoring and detection system monitors the distance between an object within a distance detection range and a target object; when the distance between the object and the target object satisfies a first preset condition, it further sends a first instruction to the image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and it further determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state;
the image acquisition system receives the first instruction, the first instruction being sent by the monitoring and detection system when the distance between the object and the target object satisfies the first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; it further acquires the image within its image acquisition region, so that the monitoring and detection system determines the state of the target object according to the recognition result, the state of the target object comprising an active state and/or an inactive state;
the object recognition system receives the image within the image acquisition region, performs object recognition on it to obtain the recognition result, and returns the recognition result; and
the service system receives the state of the target object and determines a service flow corresponding to that state.
This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and an internal memory.
The internal memory may include a non-persistent memory, a random-access memory (RAM), and/or a non-volatile memory in computer-readable media, such as a read-only memory (ROM) or a flash RAM. The internal memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carriers.
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further restrictions, an element defined by the statement "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
A person skilled in the art should understand that the embodiments of this application can be provided as methods, systems, or computer program products. Therefore, this application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.
The above descriptions are merely embodiments of this application and are not intended to limit it. For a person skilled in the art, this application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of the claims of this application.

Claims (19)

  1. An active/inactive state detection method, performed by a monitoring and detection system, the method comprising:
    monitoring the distance between an object within a distance detection range and a target object;
    when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
    determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  2. The method according to claim 1, wherein the first preset condition comprises at least one of the following:
    the distance between the object and the target object is less than a first preset threshold;
    a statistical value of the distance between the object and the target object over a first preset time period is less than a second preset threshold;
    the change in the distance between the object and the target object over a first preset time interval is greater than a third preset threshold;
    the distance between the object and the target object is greater than a fourth preset threshold;
    a statistical value of the distance between the object and the target object over a second preset time period is greater than a fifth preset threshold.
  3. The method according to claim 1, wherein before determining the state of the target object according to the recognition result obtained by performing object recognition on the image within the image acquisition region, the method further comprises:
    sending the image within the image acquisition region to an object recognition system, for the object recognition system to perform object recognition on the image and obtain the recognition result; and
    receiving the recognition result returned by the object recognition system.
  4. The method according to claim 3, wherein the object recognition system is a cloud-based object recognition system.
  5. The method according to claim 3, wherein the recognition result comprises at least one of the following:
    a conclusion as to whether the image within the image acquisition region contains a preset object;
    the number of preset objects contained in the image within the image acquisition region.
  6. The method according to claim 5, wherein determining the state of the target object according to the recognition result comprises at least one of the following:
    when the image within the image acquisition region contains the preset object, determining that the state of the target object is the active state;
    when the image within the image acquisition region does not contain the preset object, determining that the state of the target object is the inactive state;
    when the number of preset objects contained in the image within the image acquisition region is greater than zero, determining that the state of the target object is the active state;
    when the number of preset objects contained in the image within the image acquisition region is zero, determining that the state of the target object is the inactive state.
  7. The method according to any one of claims 1 to 6, wherein after determining the state of the target object, the method further comprises:
    sending the state of the target object to a service system, for the service system to determine a service flow corresponding to the state of the target object.
  8. The method according to any one of claims 1 to 6, wherein after determining the state of the target object, the method further comprises:
    sending a second instruction to the image acquisition system corresponding to the distance detection range, so as to shut down the image acquisition system or switch it to a standby mode.
  9. The method according to claim 8, wherein sending the second instruction to the image acquisition system corresponding to the distance detection range comprises:
    when the distance between the object and the target object satisfies a second preset condition, sending the second instruction to the image acquisition system corresponding to the distance detection range;
    wherein the second preset condition comprises: the change in the distance between the object and the target object over a second preset time interval is less than a sixth preset threshold.
  10. The method according to any one of claims 1 to 6, wherein after determining the state of the target object, the method further comprises:
    recording the state of the target object; and
    determining the first preset condition according to the state of the target object.
  11. An active/inactive state detection method, performed by an image acquisition system, the method comprising:
    receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; and
    acquiring an image within an image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  12. The method according to claim 11, wherein after acquiring the image within the image acquisition region of the image acquisition system, the method further comprises:
    receiving a second instruction, the second instruction being sent by the monitoring and detection system after determining the state of the target object; and
    according to the second instruction, shutting down the image acquisition system or switching it to a standby mode.
  13. An active/inactive state detection apparatus, applied to a monitoring and detection system, the apparatus comprising:
    a distance monitoring module, which monitors the distance between an object within a distance detection range and a target object;
    a first instruction sending module, which, when the distance between the object and the target object satisfies a first preset condition, sends a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
    a state determination module, which determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  14. An active/inactive state detection apparatus, applied to an image acquisition system, the apparatus comprising:
    a first instruction receiving module, which receives a first instruction sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; and
    an image acquisition module, which acquires an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  15. An electronic device, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the following operations:
    monitoring the distance between an object within a distance detection range and a target object;
    when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
    determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  16. A computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
    monitoring the distance between an object within a distance detection range and a target object;
    when the distance between the object and the target object satisfies a first preset condition, sending a first instruction to an image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and
    determining the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  17. An electronic device, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the following operations:
    receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
    acquiring an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  18. A computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
    receiving a first instruction, the first instruction being sent by a monitoring and detection system when the distance between an object and a target object satisfies a first preset condition, the object being within a distance detection range of the monitoring and detection system, and the distance detection range corresponding to an image acquisition system; and
    acquiring an image within the image acquisition region of the image acquisition system, so that the monitoring and detection system determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state.
  19. An application system, comprising a monitoring and detection system, an image acquisition system, an object recognition system, and a service system, wherein:
    the monitoring and detection system monitors the distance between an object within a distance detection range and a target object; when the distance between the object and the target object satisfies a first preset condition, it further sends a first instruction to the image acquisition system corresponding to the distance detection range, so as to activate the image acquisition system to acquire an image within its image acquisition region; and it further determines the state of the target object according to a recognition result obtained by performing object recognition on the image within the image acquisition region, the state of the target object comprising an active state and/or an inactive state;
    the image acquisition system receives the first instruction, the first instruction being sent by the monitoring and detection system when the distance between the object and the target object satisfies the first preset condition, the object being within the distance detection range of the monitoring and detection system, and the distance detection range corresponding to the image acquisition system; it further acquires the image within its image acquisition region, so that the monitoring and detection system determines the state of the target object according to the recognition result, the state of the target object comprising an active state and/or an inactive state;
    the object recognition system receives the image within the image acquisition region, performs object recognition on it to obtain the recognition result, and returns the recognition result; and
    the service system receives the state of the target object and determines a service flow corresponding to that state.
PCT/CN2019/073120 2018-02-08 2019-01-25 入离场状态检测方法和装置 WO2019154112A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
SG11202005455YA SG11202005455YA (en) 2018-02-08 2019-01-25 Active/inactive state detection method and apparatus
JP2020536646A JP6916394B2 (ja) 2018-02-08 2019-01-25 アクティブ/非アクティブ状態検出方法および装置
KR1020207018820A KR102366681B1 (ko) 2018-02-08 2019-01-25 활성/비활성 상태 검출 방법 및 장치
EP19751705.5A EP3716142A4 (en) 2018-02-08 2019-01-25 INPUT / OUTPUT STATE DETECTION PROCESS AND DEVICE
US16/889,622 US11102458B2 (en) 2018-02-08 2020-06-01 Active/inactive state detection method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810127142.2A CN108427914B (zh) 2018-02-08 2018-02-08 入离场状态检测方法和装置
CN201810127142.2 2018-02-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/889,622 Continuation US11102458B2 (en) 2018-02-08 2020-06-01 Active/inactive state detection method and apparatus

Publications (1)

Publication Number Publication Date
WO2019154112A1 true WO2019154112A1 (zh) 2019-08-15

Family

ID=63156823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073120 WO2019154112A1 (zh) 2018-02-08 2019-01-25 入离场状态检测方法和装置

Country Status (8)

Country Link
US (1) US11102458B2 (zh)
EP (1) EP3716142A4 (zh)
JP (1) JP6916394B2 (zh)
KR (1) KR102366681B1 (zh)
CN (2) CN108427914B (zh)
SG (1) SG11202005455YA (zh)
TW (1) TWI692728B (zh)
WO (1) WO2019154112A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116893384A (zh) * 2023-09-11 2023-10-17 南京中旭电子科技有限公司 数字霍尔传感器监测方法及平台

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427914B (zh) 2018-02-08 2020-08-18 阿里巴巴集团控股有限公司 入离场状态检测方法和装置
CN113452926B (zh) * 2018-10-26 2023-01-13 创新先进技术有限公司 图像采集设备、系统及方法
CN110018646A (zh) * 2019-04-19 2019-07-16 北京潞电电气设备有限公司 一种电力设备操作规范监控系统
CN110084183A (zh) * 2019-04-25 2019-08-02 杭州鸿雁电器有限公司 确定人员进出区域的方法和系统
CN112207812A (zh) * 2019-07-12 2021-01-12 阿里巴巴集团控股有限公司 设备控制方法、设备、系统及存储介质
CN110427887B (zh) * 2019-08-02 2023-03-10 腾讯科技(深圳)有限公司 一种基于智能的会员身份识别方法及装置
CN110661973B (zh) * 2019-09-29 2022-04-22 联想(北京)有限公司 一种控制方法及电子设备
CN110826506A (zh) * 2019-11-11 2020-02-21 上海秒针网络科技有限公司 目标行为的识别方法及装置
CN111507318A (zh) * 2020-07-01 2020-08-07 口碑(上海)信息技术有限公司 基于图像识别的离店检测方法及装置
CN112906483B (zh) * 2021-01-25 2024-01-23 中国银联股份有限公司 一种目标重识别方法、装置及计算机可读存储介质
CN113091730B (zh) * 2021-03-25 2023-07-07 杭州海康威视系统技术有限公司 一种轨迹确定方法及装置
CN113610004B (zh) * 2021-08-09 2024-04-05 上海擎朗智能科技有限公司 一种图像处理方法、机器人及介质
CN113701893B (zh) * 2021-08-30 2023-05-02 杭州睿影科技有限公司 测温方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105915784A (zh) * 2016-04-01 2016-08-31 纳恩博(北京)科技有限公司 信息处理方法和装置
CN107278301A (zh) * 2016-12-30 2017-10-20 深圳前海达闼云端智能科技有限公司 一种辅助用户寻物的方法及装置
CN107378949A (zh) * 2017-07-22 2017-11-24 深圳市萨斯智能科技有限公司 一种机器人检测物体的方法和机器人
CN107589707A (zh) * 2017-08-16 2018-01-16 深圳市启惠智能科技有限公司 一种监控处理方法、服务器及计算机存储介质
CN108427914A (zh) * 2018-02-08 2018-08-21 阿里巴巴集团控股有限公司 入离场状态检测方法和装置

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000099837A (ja) * 1998-09-18 2000-04-07 Toshiba Corp 監視システム
JP3992909B2 (ja) * 2000-07-03 2007-10-17 富士フイルム株式会社 本人画像提供システム
US6580360B1 (en) * 2000-12-13 2003-06-17 Digibot, Inc. Smart table
TWI244624B (en) * 2004-06-04 2005-12-01 Jin-Ding Lai Device and method for defining an area whose image is monitored
TWI357582B (en) * 2008-04-18 2012-02-01 Univ Nat Taiwan Image tracking system and method thereof
JP5347549B2 (ja) * 2009-02-13 2013-11-20 ソニー株式会社 情報処理装置および情報処理方法
CN103425443A (zh) * 2012-05-22 2013-12-04 联想(北京)有限公司 一种控制方法、系统和电子设备
JP6090559B2 (ja) * 2012-10-04 2017-03-08 三菱自動車工業株式会社 発進安全装置
JP6044472B2 (ja) * 2013-06-28 2016-12-14 富士ゼロックス株式会社 情報処理装置及びプログラム
JP5590193B1 (ja) * 2013-06-28 2014-09-17 富士ゼロックス株式会社 情報処理装置及びプログラム
CN103778577B (zh) * 2013-08-30 2017-08-29 陈飞 一种依据餐具信息调控餐桌并记录用餐信息的方法及装置
US20160180712A1 (en) * 2015-08-27 2016-06-23 Sparkcity.Com Ltd. Citywide parking reservation system and method
CN105472231B (zh) * 2014-09-03 2019-03-29 联想(北京)有限公司 控制方法、图像采集装置和电子设备
CN105100730A (zh) * 2015-08-21 2015-11-25 联想(北京)有限公司 一种监控方法及摄像头装置
US10043374B2 (en) * 2015-12-30 2018-08-07 Lenovo (Beijing) Limited Method, system, and electronic device for monitoring
KR101815144B1 (ko) * 2016-07-05 2018-01-05 이응수 얼굴인식 기반의 사진 공유 방법 및 이를 이용한 사진 공유 시스템
US11311210B2 (en) * 2016-07-14 2022-04-26 Brightday Technologies, Inc. Posture analysis systems and methods
CN107666589A (zh) * 2016-07-29 2018-02-06 中兴通讯股份有限公司 一种远程监控方法及设备
CN106603969A (zh) * 2016-11-04 2017-04-26 乐视控股(北京)有限公司 一种视频监控方法、装置和系统以及探测设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105915784A (zh) * 2016-04-01 2016-08-31 纳恩博(北京)科技有限公司 信息处理方法和装置
CN107278301A (zh) * 2016-12-30 2017-10-20 深圳前海达闼云端智能科技有限公司 一种辅助用户寻物的方法及装置
CN107378949A (zh) * 2017-07-22 2017-11-24 深圳市萨斯智能科技有限公司 一种机器人检测物体的方法和机器人
CN107589707A (zh) * 2017-08-16 2018-01-16 深圳市启惠智能科技有限公司 一种监控处理方法、服务器及计算机存储介质
CN108427914A (zh) * 2018-02-08 2018-08-21 阿里巴巴集团控股有限公司 入离场状态检测方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3716142A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116893384A (zh) * 2023-09-11 2023-10-17 南京中旭电子科技有限公司 数字霍尔传感器监测方法及平台
CN116893384B (zh) * 2023-09-11 2023-12-01 南京中旭电子科技有限公司 数字霍尔传感器监测方法及平台

Also Published As

Publication number Publication date
EP3716142A1 (en) 2020-09-30
TW201935309A (zh) 2019-09-01
TWI692728B (zh) 2020-05-01
US11102458B2 (en) 2021-08-24
CN108427914B (zh) 2020-08-18
CN111652197A (zh) 2020-09-11
JP6916394B2 (ja) 2021-08-11
KR102366681B1 (ko) 2022-03-21
EP3716142A4 (en) 2021-01-20
CN111652197B (zh) 2023-04-18
CN108427914A (zh) 2018-08-21
KR20200093016A (ko) 2020-08-04
US20200296335A1 (en) 2020-09-17
JP2021513695A (ja) 2021-05-27
SG11202005455YA (en) 2020-07-29

Similar Documents

Publication Publication Date Title
WO2019154112A1 (zh) 入离场状态检测方法和装置
US11069217B2 (en) Sensor configuration
US9946357B2 (en) Control using movements
US9984590B2 (en) Identifying a change in a home environment
CN111788821B (zh) 一种用于检测电子设备附近的表面的方法和装置
CN106603969A (zh) 一种视频监控方法、装置和系统以及探测设备
US20230000302A1 (en) Cleaning area estimation device and method for estimating cleaning area
US10475310B1 (en) Operation method for security monitoring system
US10540542B2 (en) Monitoring
CN110602197A (zh) 物联网控制装置和方法、电子设备
CN113721232B (zh) 目标对象检测方法、装置、电子设备及介质
CN105807928B (zh) 一种任意墙面互动系统及其扫描误差处理方法
CN112731364B (zh) 毫米波雷达智能厕位管理方法、系统、平台、介质及设备
US20240135686A1 (en) Method and electronic device for training neural network model by augmenting image representing object captured by multiple cameras
CN106597455A (zh) 利用超音波测距避免碰撞发生的方法及其系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19751705

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207018820

Country of ref document: KR

Kind code of ref document: A

Ref document number: 2020536646

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019751705

Country of ref document: EP

Effective date: 20200625

NENP Non-entry into the national phase

Ref country code: DE