CN111428564A - Method and device for acquiring call prompt information, storage medium and terminal - Google Patents


Info

Publication number
CN111428564A
Authority
CN
China
Prior art keywords
information
prompt
behavior
user
call
Prior art date
Legal status
Pending
Application number
CN202010114487.1A
Other languages
Chinese (zh)
Inventor
张帅
Current Assignee
Zhejiang Koubei Network Technology Co Ltd
Original Assignee
Zhejiang Koubei Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Koubei Network Technology Co Ltd filed Critical Zhejiang Koubei Network Technology Co Ltd
Priority to CN202010114487.1A
Publication of CN111428564A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/7837 Retrieval using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F16/784 Retrieval using objects detected or recognised in the video content, the detected or recognised objects being people
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/12 Hotels or restaurants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a method and device for acquiring call prompt information, a storage medium, and a terminal. It relates to the technical field of data processing and mainly aims to solve the low efficiency with which existing call prompt information is acquired. The method comprises the following steps: collecting behavior information of a user; identifying whether the behavior information matches preset action information, where the preset action information is a reference action instructing the user to issue a call prompt; and if it matches, determining the table information corresponding to the behavior information and generating call prompt information for that table. The method is mainly used for acquiring call prompt information.

Description

Method and device for acquiring call prompt information, storage medium and terminal
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for acquiring call prompt information, a storage medium, and a terminal.
Background
With the continuous development of internet technology, and to make the ordering process more convenient, users can now prompt a waiter to take their order in a variety of ways. For example, in scan-to-order or smart-restaurant settings, a user can scan a table QR code provided by the restaurant to queue and place an order, prompting one of a small number of idle waiters to process it; alternatively, the user can press a caller device mounted on the dining table to prompt service staff to take the order.
At present, generating call prompt information by scanning a table code requires a specific app to be installed in advance, so the steps a user must perform to call a waiter are relatively complex, which slows the acquisition of the call prompt. Performing a call prompt with a caller device, in turn, requires manual monitoring, so the call prompt information it emits cannot be used to summon a waiter promptly and accurately. Both approaches reduce the efficiency with which call prompt information is acquired.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for acquiring call prompt information, a storage medium, and a terminal, mainly aiming to solve the low efficiency of existing approaches to acquiring call prompt information.
According to an aspect of the present invention, there is provided a method for acquiring call prompt information, including:
collecting behavior information of a user;
identifying whether the behavior information matches preset action information, where the preset action information is a reference action instructing the user to issue a call prompt;
and if it matches, determining the table information corresponding to the behavior information, and generating call prompt information for that table.
Further, the collecting of the behavior information of the user includes:
obtaining behavior image data of the user, and extracting behavior information by identifying frame images in the behavior image data.
Further, the determining of the table information corresponding to the behavior information includes:
extracting, from the behavior image data, any frame image containing the behavior information, and comparing that frame image with a preset table identifier image to obtain the table information corresponding to the behavior information.
Further, the method further comprises:
and outputting the call prompt information bound to the table information, so that call task information can be determined from the call prompt information.
Further, the method further comprises:
receiving prompt state information corresponding to the call task information determined according to the call prompt information;
and outputting prompt information corresponding to the prompt state information.
Further, the method further comprises:
and outputting prompt information for the preset action, to prompt the user to perform the reference action for a call prompt.
Further, before outputting the prompt message of the preset action, the method further includes:
collecting position state information of a user, and identifying whether the position state information meets a preset prompt condition;
the outputting the prompt information of the preset action comprises:
and outputting the prompt information for the preset action when the position state information is identified as meeting the preset prompt condition.
Further, the collecting of the location state information of the user comprises:
receiving state detection information from a user-triggered state detection response, and deriving the position state information from the state detection information.
According to another aspect of the present invention, there is provided an apparatus for acquiring call prompt information, including:
the acquisition module is used for acquiring behavior information of a user;
the identification module is used for identifying whether the behavior information matches preset action information, where the preset action information is a reference action instructing the user to issue a call prompt;
and the generating module is used for determining, if the behavior information matches, the table information corresponding to the behavior information, and generating call prompt information for that table.
Further, the acquisition module is specifically configured to acquire behavior image data of a user, and extract behavior information by identifying a frame image in the behavior image data.
Further, the generating module is specifically configured to extract any frame image including the behavior information from the behavior image data, and compare the frame image with a preset table identifier image to obtain table information corresponding to the behavior information.
Further, the apparatus further comprises:
and the output module is used for outputting the call prompt information bound to the table information, so that call task information can be determined from the call prompt information.
Further, the apparatus further comprises a receiving module;
the receiving module is used for receiving prompt state information corresponding to the call task information determined according to the call prompt information;
the output module is further configured to output prompt information corresponding to the prompt state information.
Further, the output module is further configured to output prompt information for the preset action, to prompt the user to perform the reference action for a call prompt.
Further, the acquisition module is further configured to acquire position state information of a user and identify whether the position state information meets a preset prompt condition;
and the output module is further configured to output the prompt information for the preset action when the position state information is identified as meeting the preset prompt condition.
Further, the acquisition module is specifically configured to receive state detection information from a user-triggered state detection response, and to derive the position state information from the state detection information.
According to another aspect of the present invention, a storage medium is provided, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the above method for acquiring call prompt information.
According to still another aspect of the present invention, there is provided a terminal including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the method for acquiring the call prompt information.
The technical solutions provided by the embodiments of the present invention offer at least the following advantages:
Compared with the prior art, in which call prompt information is generated by scanning a table code or a call prompt is issued with a caller device, the method and device provided by the embodiments of the present invention identify whether the behavior information of a user matches preset action information serving as a reference action that instructs the user to issue a call prompt; if it matches, the table information corresponding to the behavior information is determined and call prompt information for that table is generated. This simplifies the steps of issuing a call prompt and speeds up its acquisition. Because the call prompt information is obtained promptly and accurately through matching against the preset action and determination of the table information, no manual monitoring is needed, the accuracy of the generated call prompt information is ensured, and the efficiency of acquiring call prompt information is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a method for acquiring call alert information according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating another method for acquiring call alert information according to an embodiment of the present invention;
fig. 3 shows a schematic diagram of a call prompt information generation flow provided by an embodiment of the present invention;
fig. 4 shows another schematic diagram of a call prompt information generation flow provided by an embodiment of the present invention;
fig. 5 is a block diagram illustrating an apparatus for obtaining call prompt information according to an embodiment of the present invention;
fig. 6 is a block diagram illustrating another apparatus for obtaining call prompt information according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the invention provides a method for acquiring call prompt information, which, as shown in fig. 1, comprises the following steps:
101. Collecting the behavior information of the user.
The behavior information is used to determine whether the user is issuing a call prompt; therefore, the collected behavior information may include real-time behavior generated by the user, i.e., all behavior captured by the camera devices.
It should be noted that, to capture the behavior of all users at the front end, the users' behaviors are recorded by camera devices installed at different positions in the physical store; these may be fisheye cameras with a wide shooting angle, so that the behavior of every user in the store can be covered. In addition, behavior image data or behavior names pre-stored in a database serve as the comparison basis for extracting ordering-related behavior information: when a behavior image is captured, it is compared against the behavior image data or behavior names in the database, and the user's behavior information is determined from the captured user behavior image.
102. Identifying whether the behavior information matches preset action information.
The preset action information is a reference action that instructs the user to issue a call prompt; when the user performs the corresponding action according to the reference action, it can be determined that the user is calling a member of the service staff, for example to order or to ask a question. The preset action may be a predefined designated action, for example waving both hands while smiling, and the embodiment of the present invention is not specifically limited in this regard.
It should be noted that, because the preset action is a reference action instructing the user to issue a call prompt, the preset action information should be configured in advance as an action the user does not perform frequently; it may consist of one action or a combination of multiple actions, and the embodiment of the present invention is not specifically limited in this regard. In addition, since the collected behavior information is video stream data, whether the behavior information matches the preset action information can be determined by applying image recognition to both.
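As an illustrative sketch of this matching step (the action labels, the `matches_preset` helper, and the set-based comparison are assumptions for illustration, not the patent's implementation), per-frame recognition results can be checked against a preset combination of reference actions:

```python
# Hypothetical sketch: match recognized behavior labels against a preset
# action combination (e.g. "wave_both_hands" together with "smile").
# Labels and structure are illustrative assumptions.

PRESET_ACTIONS = {"wave_both_hands", "smile"}  # reference action combination

def matches_preset(recognized_labels, preset=PRESET_ACTIONS):
    """Return True if every action in the preset combination was
    recognized somewhere in the collected behavior information."""
    return preset.issubset(set(recognized_labels))

# Per-frame recognition results extracted from the behavior image data
frames = ["stand_up", "wave_both_hands", "smile", "sit_down"]
print(matches_preset(frames))        # both preset actions present
print(matches_preset(["stand_up"]))  # no match, no call prompt
```

Using a combination of actions, as the sketch does, makes accidental triggering by a single common movement less likely, consistent with the requirement that the preset action not be one the user performs frequently.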
103. If it matches, determining the table information corresponding to the behavior information, and generating the call prompt information for that table.
For the embodiment of the invention, when the behavior information generated by the user is determined to match the preset action information, the user needs to order; the table information corresponding to the behavior information is therefore determined, i.e., which table wants to order, and the call prompt information is generated according to that table information. Since users issue calls from different tables, and to make the call prompt more accurate, the table information for the call prompt can be determined by identifying the specific table at which the user is located in the physical store, and the call prompt information is generated accordingly. Specifically, the table information for the table at which the user is located, such as a table label, may be identified through image recognition technology; the embodiment of the present invention is not specifically limited in this regard.
Compared with the prior art, in which call prompt information is generated by scanning a table code or a call prompt is issued with a caller device, the method provided by the embodiment of the present invention identifies whether the behavior information of a user matches preset action information serving as a reference action that instructs the user to issue a call prompt; if it matches, the table information corresponding to the behavior information is determined and call prompt information for that table is generated. This simplifies the steps of issuing a call prompt, speeds up its acquisition, obtains the call prompt information promptly and accurately through matching against the preset action and determination of the table information, and improves ordering efficiency without requiring manual monitoring.
The embodiment of the invention provides another method for acquiring call prompt information, which, as shown in fig. 2, comprises the following steps:
201. Collecting the position state information of the user, and identifying whether it meets the preset prompt condition.
For the embodiment of the invention, the prompt information for the preset action may be voice prompt information or image prompt information. To make the reference-action prompt more accurate, so that a user can correctly produce the corresponding call prompt according to the reference action, the user's position state information is collected in real time and checked against a preset prompt condition, i.e., it is identified whether the user's state should trigger output of the prompt information for the preset action. The position state information describes the user's position relative to the physical store and is divided into an entering-store state and a leaving-store state; the entering-store state is checked against the preset prompt condition, which is a condition defined on the entering-store state so that a user entering the store is prompted with the reference action.
For further limitation and description, the collecting of the location state information of the user in the embodiment of the present invention includes: receiving state detection information from a user-triggered state detection response, and deriving the position state information from the state detection information.
In the embodiment of the invention, the position state information can be acquired by configuring two detectors that detect whether a person is present at a first position and a second position, i.e., by receiving state detection information from a state detection response. The state detection information includes first position information and second position information: when they appear in the order first position then second position, the state is determined to be entering the store; when they appear in the order second position then first position, the state is determined to be leaving the store. The first position corresponds to a location outside the store and the second position to a location inside the store. The position state information is thus derived from the state detection information, and the embodiment of the present invention is not specifically limited in this regard.
For example, the state detection information received from the user-triggered state detection response includes first position information and second position information; because the first position information is detected before the second position information, the position state is determined to be entering the store, and this entering-store state is identified as meeting the preset prompt condition.
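The two-detector scheme described above can be sketched as follows (the detector ids, function name, and state labels are illustrative assumptions, not the patent's implementation):

```python
# Hypothetical sketch of the two-detector scheme: detector 1 covers the
# outside-the-store position, detector 2 the inside position. The order
# in which they fire classifies the position state.

def classify_position_state(events):
    """events: detector ids in firing order (1 = outside, 2 = inside)."""
    if list(events) == [1, 2]:
        return "entering_store"   # outside first, then inside
    if list(events) == [2, 1]:
        return "leaving_store"    # inside first, then outside
    return "unknown"

print(classify_position_state([1, 2]))  # meets the prompt condition
print(classify_position_state([2, 1]))  # does not trigger a prompt
```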
202. Outputting the prompt information for the preset action when the position state information is identified as meeting the preset prompt condition.
For the embodiment of the invention, when the position state information meets the preset prompt condition, indicating that the user has entered the physical store, prompt information for the preset action is output to prompt the user to perform the reference action for a call prompt. The preset action may be configured in advance, and the corresponding prompt information serves as the content that prompts the user to perform the ordering action, for example: "please smile at the camera and wave your hand to call a waiter".
It should be noted that, since the preset action is to be performed by the user, and to avoid the configured preset action being one of the user's common movements and thereby losing its specific function as a call prompt, the preset action may be updated at a preset time interval, with the corresponding prompt information updated accordingly.
203. Obtaining behavior image data of the user, and extracting behavior information by identifying frame images in the behavior image data.
For the embodiment of the invention, in order to acquire the user's behavior information accurately, the user's behavior is captured by camera devices arranged throughout the physical store, yielding the user's behavior image data, i.e., a video data stream. Since the video data stream is continuous, and in order to identify the user's behavior information accurately, the frame images in the behavior image data, i.e., the individual video frames, are split out, the user's action in each frame is identified through image recognition, and the behavior information is extracted. The behavior information extracted by image recognition may be image data of the behavior, or the action name corresponding to the recognized behavior. For example, a SIFT-style algorithm can obtain the key points of the user's behavior in an image, attach a corresponding descriptor to each key point, find multiple pairs of mutually matching feature points by comparing two or more feature points, and thereby establish the correspondence of the action, i.e., the image data of the behavior information or the corresponding action name, where the action name is given by a mapping between action names and behavior image recognition results in a behavior database. The embodiment of the present invention is not specifically limited in this regard.
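The feature-point matching idea attributed here to SIFT-style algorithms can be sketched with a nearest-neighbour ratio test on toy descriptors (the descriptors, Euclidean distance, and 0.75 ratio are illustrative assumptions; a production system would use a library implementation of SIFT rather than this sketch):

```python
import math

# Toy sketch of descriptor matching as used in SIFT-style pipelines:
# for each descriptor in image A, find its two nearest neighbours in
# image B and accept the match only if the nearest one is clearly
# closer than the second (Lowe-style ratio test).

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_a, desc_b, ratio=0.75):
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))  # pair of mutually matching points
    return matches

a = [(0.0, 1.0), (5.0, 5.0)]               # descriptors from frame A
b = [(0.1, 1.0), (5.0, 5.2), (9.0, 9.0)]   # descriptors from frame B
print(match_descriptors(a, b))  # each keypoint pairs with its close neighbour
```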
204. Identifying whether the behavior information matches the preset action information.
This step is the same as step 102 shown in fig. 1, and is not described herein again.
It should be noted that the behavior information may be image data of a behavior or the action name corresponding to the behavior; therefore, when identifying whether the behavior information matches the preset action information, the comparison may be performed on image data or on action names accordingly. For example, if the behavior information is image data of a user waving both hands and the preset action information is image data of a user waving both hands while smiling, the two pieces of image data are compared for a match. If the behavior information extracted in step 203 is the action name "user stands up" and the preset action information is the action name "waving", the two action names are compared for a match.
In addition, because the behaviors of different users may deviate from the preset action information in amplitude, stature, speed, and so on, a matching threshold can be set in the matching process, improving the robustness of recognition and broadening the range of user behaviors that are correctly recognized.
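A minimal sketch of such a matching threshold, assuming behavior and reference actions are reduced to sets of pose features (the feature names, scoring scheme, and 0.8 threshold are all illustrative assumptions, not values from the patent):

```python
# Hypothetical sketch: tolerate per-user variation (amplitude, stature,
# speed) by accepting a match when a similarity score exceeds a
# threshold rather than requiring an exact match.

def action_similarity(observed, reference):
    """Fraction of reference pose features reproduced by the observed action."""
    hit = sum(1 for f in reference if f in observed)
    return hit / len(reference)

MATCH_THRESHOLD = 0.8  # illustrative tolerance

reference = ["left_arm_up", "right_arm_up", "smile", "facing_camera", "standing"]
observed  = ["left_arm_up", "right_arm_up", "smile", "standing"]  # slight deviation

score = action_similarity(observed, reference)
print(score >= MATCH_THRESHOLD)  # accepted despite the missing feature
```

Raising the threshold reduces false call prompts at the cost of rejecting more genuine ones; the right value would have to be tuned on real behavior data.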
205. If it matches, determining the table information corresponding to the behavior information, and generating the call prompt information for that table.
This step is the same as step 103 shown in fig. 1, and is not described herein again.
For example, as shown in fig. 3, a fisheye camera captures user behavior information from the image source within its shooting range, and image recognition is used to identify whether the user behavior information matches the preset action information. If it matches, the behavior is taken as the user's intention to issue a call prompt; the table corresponding to the behavior information is then determined by position comparison, the resolved table information is used to generate the call prompt information for that table, and the call prompt information is transmitted to a service task allocation system, such as a server system, which allocates the call task information.
For further limitation and description, the determining of the table information corresponding to the behavior information in the embodiment of the present invention includes: extracting, from the behavior image data, any frame image containing the behavior information, and comparing that frame image with a preset table identifier image to obtain the table information corresponding to the behavior information.
Because the behavior information collected in the embodiment of the invention is behavior content in behavior image data, in order to determine which user sent the call prompt, it is necessary to determine for which table the call prompt should be generated, so that a waiter can serve the corresponding table. When shooting the user behavior, the camera device simultaneously captures an image of the table where the user is located, and the information of that table, i.e. the corresponding table identifier, can be determined by comparing a preset table position identification image with any captured frame image containing the behavior information. For example, a frame image of any hand-waving action is extracted from the hand-waving image data and compared with the preset table position identification image to obtain the table position of the user; if the user at table 2 waves a hand, the table corresponding to the waving action is determined to be table 2.
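The table-resolution step above can be sketched as a nearest-reference lookup. The reference dictionary, the pixel-difference metric, and the function name are illustrative assumptions; a production system would compare table identification markers rather than whole frames.

```python
# Hypothetical sketch of step 205: a captured frame containing the behavior is
# compared against preset table position identification images (one reference
# per table), and the best-matching reference yields the table number.
import numpy as np


def table_for_frame(frame, table_refs):
    """Return the table id whose reference image best matches the frame."""
    best_id, best_diff = None, float("inf")
    for table_id, ref in table_refs.items():
        # Mean absolute pixel difference as a toy comparison metric.
        diff = float(np.abs(np.asarray(frame, dtype=float)
                            - np.asarray(ref, dtype=float)).mean())
        if diff < best_diff:
            best_id, best_diff = table_id, diff
    return best_id
```

With this sketch, a frame taken at table 2 should differ least from the table 2 reference image, so table 2 is returned.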
206. And outputting the call prompt information bound with the table position information.
In the embodiment of the invention, in order to quickly call a waiter for the service task according to the call prompt information, the call prompt information bound with the table position information is output, so that the call task information can be determined from the call prompt information. To allocate the service task information accurately according to the call prompt information, the current end may send the call prompt information to a service task allocation system, which allocates the corresponding call prompt information according to the service task state.
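The binding and hand-off to the allocation system can be illustrated with a small sketch. The message fields, class name, and the in-memory allocator are hypothetical stand-ins for the service task allocation system, not its actual interface.

```python
# Hypothetical sketch of step 206: the call prompt information is bound with
# the table position information and handed to a service task allocation
# system, which returns prompt state information to the current end.
import time


def make_call_prompt(table_id):
    """Bind the call prompt information with the table position information."""
    return {"type": "call_prompt", "table": table_id, "created_at": time.time()}


class ServiceTaskAllocator:
    """Toy stand-in for the service task allocation system."""

    def __init__(self):
        self.tasks = []

    def dispatch(self, prompt):
        # Turn the call prompt information into call task information and
        # report prompt state information back to the current end.
        task = {"task_id": len(self.tasks) + 1, "table": prompt["table"]}
        self.tasks.append(task)
        return {"table": prompt["table"],
                "task_id": task["task_id"],
                "status": "allocated"}
```

The returned status dictionary plays the role of the prompt state information described next, letting the current end confirm whether allocation succeeded.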
Further, in order to determine whether the call prompt information has been successfully allocated to corresponding call task information, and thus whether the call prompt information needs to be regenerated, the embodiment of the present invention further includes: receiving prompt state information corresponding to the call task information determined from the call prompt information; and outputting prompt information corresponding to the prompt state information.
The service task allocation system generates corresponding call task information according to the call prompt information, then generates prompt state information for the call prompt information to indicate whether the service task allocation has been completed, and sends the prompt state information to the current end. The current end therefore receives and outputs the prompt state information corresponding to the call task information determined from the call prompt information, for example that the task allocation for the table 1 call prompt information has been determined, and the prompt state information can be output as voice or image information. In the embodiment of the invention, the method is applied to the process of calling a waiter in a shop. As shown in fig. 4, a user makes an action of calling a waiter; after a fisheye camera collects the image, an SDK image identification technology identifies the action information of the user and determines whether it matches the preset action, so as to establish the user's intention to call a waiter and the user's table position information. Call prompt information generated from the table position information is sent to a waiter task system; the waiter task system determines the call task corresponding to the call prompt information and then broadcasts through a voice service system to inform the user of the call state information, and the waiter assigned to the call task performs operations such as taking the order and answering enquiries.
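The end-to-end fig. 4 flow can be summarized in one orchestration function. Every component passed in (recognizer, matcher, table resolver, dispatcher) is a hypothetical stand-in injected as a callable, so the sketch only fixes the order of the steps, not any concrete implementation.

```python
# Minimal sketch of the fig. 4 flow for a single captured frame:
# recognize the behavior, match it against the preset action, resolve the
# table position, dispatch the bound call prompt, and output the call state.
def handle_frame(frame, recognize, matches_preset_action, resolve_table, dispatch):
    """One pass of the capture-to-broadcast pipeline; returns None if no match."""
    behavior = recognize(frame)              # e.g. "wave", from an image-recognition SDK
    if not matches_preset_action(behavior):  # no call prompting intention detected
        return None
    table_id = resolve_table(frame)          # table position information
    state = dispatch({"table": table_id})    # bound call prompt -> waiter task system
    # Text that a voice service system could broadcast as the call state.
    return "Table {} call prompt: {}".format(table_id, state["status"])
```

Frames whose behavior does not match the preset action fall out of the pipeline early, which is what keeps the camera-based approach from generating spurious call prompts.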
Compared with the prior art, in which call prompt information is generated by scanning a table code or a caller device is used for the call prompt, the method for acquiring call prompt information provided by the embodiment of the invention identifies whether the behavior information of the user matches preset action information serving as a reference action indicating that the user is making a call prompt; if they match, the table position information corresponding to the behavior information is determined and the call prompt information of the table position information is generated. This simplifies the operation steps of the call prompt, increases the speed at which the call prompt is acquired, obtains the call prompt information timely and accurately through matching with the preset action and determination of the table position information, and improves ordering efficiency without manual monitoring.
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention provides an apparatus for acquiring call prompt information, where as shown in fig. 5, the apparatus includes: an acquisition module 31, an identification module 32 and a generation module 33.
The acquisition module 31 is used for acquiring behavior information of a user;
the identification module 32 is configured to identify whether the behavior information matches preset action information, where the preset action information is a reference action indicating that a user performs a call prompt;
and a generating module 33, configured to determine, if the behavior information matches the preset action information, the table position information corresponding to the behavior information, and generate the call prompt information of the table position information.
Compared with the prior art, in which call prompt information is generated by scanning a table code or a caller device is used for the call prompt, the call prompt information acquisition device provided by the embodiment of the invention collects the behavior information of the user, identifies whether the behavior information matches preset action information serving as a reference action indicating that the user is making a call prompt, determines the table position information corresponding to the behavior information if they match, and generates the call prompt information of the table position information. This simplifies the operation steps of the call prompt, increases the speed at which the call prompt is acquired, obtains the call prompt information timely and accurately through matching with the preset action and determination of the table position information, and improves ordering efficiency without manual monitoring.
Further, as an implementation of the method shown in fig. 2, an embodiment of the present invention provides another apparatus for acquiring call alert information, as shown in fig. 6, where the apparatus includes: the device comprises an acquisition module 41, an identification module 42, a generation module 43, an output module 44 and a receiving module 45.
An acquisition module 41, configured to acquire behavior information of a user;
the identification module 42 is configured to identify whether the behavior information matches preset action information, where the preset action information is a reference action indicating that a user performs a call prompt;
and a generating module 43, configured to determine, if the behavior information matches the preset action information, the table position information corresponding to the behavior information, and generate the call prompt information of the table position information.
Further, the acquisition module 41 is specifically configured to acquire behavior image data of a user, and extract behavior information by identifying a frame image in the behavior image data.
Further, the generating module 43 is specifically configured to extract any frame image including the behavior information from the behavior image data, and compare the frame image with a preset table identifier image to obtain the table information corresponding to the behavior information.
Further, the apparatus further comprises:
and the output module 44 is used for outputting the call prompt information bound with the table position information so as to determine the call task information according to the call prompt information.
Further, the apparatus further comprises:
the receiving module 45 is configured to receive prompt status information corresponding to the call task information determined according to the call prompt information;
the output module 44 is further configured to output prompt information corresponding to the prompt state information.
Further, the output module 44 is further configured to output prompt information of the preset action to prompt the user to perform a reference action of call prompt.
Further, the acquisition module 41 is further configured to collect position state information of a user, and to identify whether the position state information meets a preset prompt condition;
the output module 44 is further configured to output prompt information of the preset action when the position state information is identified as meeting the preset prompt condition.
Further, the acquisition module 41 is specifically configured to receive state detection information from a state detection response triggered by the user, and to compile the position state information from the state detection information.
Compared with the prior art, in which call prompt information is generated by scanning a table code or a caller device is used for the call prompt, the embodiment of the invention identifies whether the behavior information of the user matches preset action information serving as a reference action indicating that the user is making a call prompt; if they match, the table position information corresponding to the behavior information is determined and the call prompt information of the table position information is generated. This simplifies the operation steps of the call prompt, increases the speed at which the call prompt is acquired, obtains the call prompt information timely and accurately through matching with the preset action and determination of the table position information, and improves ordering efficiency without manual monitoring.
According to an embodiment of the present invention, a storage medium is provided. The storage medium stores at least one executable instruction, and the executable instruction can cause a processor to execute the method for acquiring call prompt information in any of the method embodiments described above.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the terminal.
As shown in fig. 7, the terminal may include: a processor (processor)502, a communication interface 504, a memory 506, and a communication bus 508.
Wherein: the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically execute relevant steps in the above-mentioned method for acquiring the call prompt information.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the invention. The terminal comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing a program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program 510 may specifically be used to cause the processor 502 to perform the following operations:
collecting behavior information of a user;
identifying whether the behavior information is matched with preset action information or not, wherein the preset action information is a reference action for indicating a user to carry out call prompt;
and if so, determining the table position information corresponding to the behavior information, and generating the call prompt information of the table position information.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for acquiring call prompt information is characterized by comprising the following steps:
collecting behavior information of a user;
identifying whether the behavior information is matched with preset action information or not, wherein the preset action information is a reference action for indicating a user to carry out call prompt;
and if so, determining the table position information corresponding to the behavior information, and generating the call prompt information of the table position information.
2. The method of claim 1, wherein the collecting behavior information of the user comprises:
behavior image data of a user is obtained, and behavior information is extracted by identifying a frame image in the behavior image data.
3. The method of claim 2, wherein the determining table information corresponding to the behavior information comprises:
any frame image containing the behavior information is extracted from the behavior image data, and the frame image is compared with a preset table position identification image to obtain the table position information corresponding to the behavior information.
4. The method of claim 1, further comprising:
and outputting the calling prompt information bound with the table information so as to determine the calling task information according to the calling prompt information.
5. The method of claim 4, further comprising:
receiving prompt state information corresponding to the call task information determined according to the call prompt information;
and outputting prompt information corresponding to the prompt state information.
6. The method according to any one of claims 1-5, further comprising:
and outputting prompt information of the preset action to prompt a user to carry out reference action of call prompt.
7. The method of claim 6, wherein before outputting the prompt for the preset action, the method further comprises:
collecting position state information of a user, and identifying whether the position state information meets a preset prompt condition;
the outputting the prompt information of the preset action comprises:
and outputting prompt information of the preset action when the position state information is identified to accord with preset prompt conditions.
8. An apparatus for acquiring call prompt information, comprising:
the acquisition module is used for acquiring behavior information of a user;
the identification module is used for identifying whether the behavior information is matched with preset action information, and the preset action information is a reference action for indicating a user to carry out calling prompt;
and the generating module is used for determining, if the behavior information matches the preset action information, the table position information corresponding to the behavior information, and generating the call prompt information of the table position information.
9. A storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for acquiring call prompt information according to any one of claims 1 to 7.
10. A terminal, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the acquisition method of the call prompt information as set forth in any one of claims 1-7.
CN202010114487.1A 2020-02-25 2020-02-25 Method and device for acquiring call prompt information, storage medium and terminal Pending CN111428564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010114487.1A CN111428564A (en) 2020-02-25 2020-02-25 Method and device for acquiring call prompt information, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN111428564A true CN111428564A (en) 2020-07-17

Family

ID=71551569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010114487.1A Pending CN111428564A (en) 2020-02-25 2020-02-25 Method and device for acquiring call prompt information, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN111428564A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203386326U (en) * 2013-07-29 2014-01-08 深圳市赛格导航科技股份有限公司 Vehicle-mounted terminal gesture alarm device
US20170053401A1 (en) * 2014-05-13 2017-02-23 Omron Corporation Posture estimation device, posture estimation system, posture estimation method, posture estimation program, and computer-readable recording medium on which posture estimation program is recorded
CN106774931A (en) * 2016-12-31 2017-05-31 佛山市幻云科技有限公司 Hospital's call management method, device and system
CN106981032A (en) * 2017-03-31 2017-07-25 旗瀚科技有限公司 A kind of food and drink intelligent robot meal ordering system and method
JPWO2017033847A1 (en) * 2015-08-24 2017-08-31 コニカミノルタ株式会社 Operation accepting apparatus and method for monitored person monitoring system, and monitored person monitoring system
CN107705469A (en) * 2017-09-29 2018-02-16 阿里巴巴集团控股有限公司 Settlement method, the intelligence of having dinner are ordered equipment and intelligent restaurant payment system
CN109447539A (en) * 2018-09-17 2019-03-08 北京云迹科技有限公司 A kind of notification method and distributed robot
CN109460749A (en) * 2018-12-18 2019-03-12 深圳壹账通智能科技有限公司 Patient monitoring method, device, computer equipment and storage medium
WO2019130674A1 (en) * 2017-12-25 2019-07-04 コニカミノルタ株式会社 Abnormal behavior detection device, method, and system for nursing facility
CN110166645A (en) * 2019-05-23 2019-08-23 上海理工大学 A kind of call service system based on electro-ocular signal control
CN110211000A (en) * 2018-02-28 2019-09-06 阿里巴巴集团控股有限公司 Table state information processing method, apparatus and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200717