WO2020108618A1 - Item prompting method, apparatus, device, and computer-readable medium - Google Patents

Item prompting method, apparatus, device, and computer-readable medium

Info

Publication number
WO2020108618A1
WO2020108618A1 · PCT/CN2019/121974 · CN2019121974W
Authority
WO
WIPO (PCT)
Prior art keywords
item
image
current environment
orientation data
data
Prior art date
Application number
PCT/CN2019/121974
Other languages
English (en)
French (fr)
Inventor
黄勤波
杜霁轩
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2020108618A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/53: Querying

Definitions

  • Embodiments of the present invention relate to, but are not limited to, an item prompting method, apparatus, device, and computer-readable medium.
  • Embodiments of the present invention provide an item prompting method, apparatus, device, and computer-readable medium to solve the problem that items are difficult to find.
  • An embodiment of the present invention provides an item prompting method.
  • the method includes:
  • An embodiment of the present invention provides an item prompting device.
  • the device includes:
  • the environment image obtaining module is configured to obtain an image of the current environment
  • the designated item search module is configured to acquire the image and orientation data of the specified item when receiving a search request to search for the specified item;
  • the designated item prompting module is configured to synthesize the image and orientation data of the designated item into the image of the current environment, and prompt using the display device.
  • An embodiment of the present invention provides an item prompting device.
  • the device includes: a processor and a memory, where the memory stores an item prompting program that can run on the processor; when the item prompting program is executed by the processor, the steps of the item prompting method are implemented.
  • An embodiment of the present invention provides a computer-readable medium on which an item prompting program is stored, and when the item prompting program is executed by a processor, the steps of the item prompting method are implemented.
  • FIG. 1a is a schematic flowchart of an item reminding method according to an embodiment of the present invention.
  • FIG. 1b is a schematic flowchart of another method for prompting an object according to an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram of an article reminding module architecture provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of camera identification and recording provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an implementation process of item location recording and early warning provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an implementation process of item query and output provided by an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for implementing an augmented reality prompt and displaying items provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural block diagram of an article prompting device according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural block diagram of an article prompting device provided by an embodiment of the present invention.
  • After storing the items and orientations scanned and recognized by the camera in a database, the embodiments of the present invention can combine the movement of an augmented reality (AR) device or virtual reality (VR) device with the orientations of the items in the database and display simulated related items, thereby prompting the user with item names and related image information and making it convenient for the user to quickly locate and control related indoor items (such as household items).
  • FIG. 1a is a schematic flowchart of an item reminding method according to an embodiment of the present invention. As shown in FIG. 1a, the method may include the following steps.
  • Step S101 Obtain an image of the current environment.
  • Step S103 When receiving a search request to search for a specified item, obtain the image and orientation data of the specified item.
  • Step S104 Synthesize the image and orientation data of the designated item into the image of the current environment, and use the display device to prompt.
  • the embodiments of the present invention combine the items and the current environment in which the items are placed.
  • the display device prompts and displays the items to be searched and their orientation, which facilitates the user to find the items and improve the user experience.
  • FIG. 1b is a schematic flowchart of an item reminding method according to an embodiment of the present invention. As shown in FIG. 1b, the method may include the following steps.
  • Step S101 Obtain an image of the current environment.
  • the trigger condition is a time condition, such as a timing trigger, a specific time trigger, a periodic trigger, and so on.
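To make the time condition concrete, the following is a minimal sketch of such a trigger check; the function name, parameters, and defaults are illustrative assumptions, not from the patent. It fires either at a configured scan time or when a configured period has elapsed since the last scan.

```python
from datetime import datetime, time

def camera_should_start(now, scan_times=(time(2, 0),), period_hours=None, last_scan=None):
    """Hypothetical time-based trigger: fire when the current time matches a
    configured scan time (to the minute), or when the configured period has
    elapsed since the last scan."""
    if any(now.hour == t.hour and now.minute == t.minute for t in scan_times):
        return True
    if period_hours is not None and last_scan is not None:
        return (now - last_scan).total_seconds() >= period_hours * 3600
    return False
```

A real camera monitoring system would evaluate such a condition on a scheduler loop and then start the panoramic scan.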
  • a camera is used for panoramic scanning (or omnidirectional scanning, 360 degree scanning).
  • the image of the current environment may be used to construct a background image, and the background image may be directly displayed by a display device, and the display device may be a VR device or an AR device.
  • Step S102 Identify each item in the image of the current environment, obtain the image and orientation data of each item, and save the image and orientation data of each item.
  • the image of each item is identified from the image of the current environment according to a preset item feature library, and the position of each identified item relative to other items is determined as the orientation data of each item.
  • a relative spatial coordinate system may be established based on the image of the current environment scanned by the camera, for example taking any point in the current environment as the coordinate origin, and then the coordinate position of each item relative to that origin is determined as the orientation data of each item.
  • the article feature database may be preset locally, or may be downloaded from a network-side server in advance, or may be obtained through self-learning.
  • the article feature database may be updated regularly.
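As an illustration of matching against a preset item feature library, the sketch below assumes a toy library of hypothetical feature vectors and nearest-neighbour matching; the patent does not specify the feature format or matching algorithm, so all names and values here are illustrative.

```python
import math

# Hypothetical item feature library: item name -> feature vector.
FEATURE_LIBRARY = {
    "remote_control": [0.9, 0.1, 0.3],
    "keys": [0.2, 0.8, 0.5],
}

def identify_item(feature, library=FEATURE_LIBRARY, threshold=0.5):
    """Match an extracted image feature against the preset library by
    Euclidean distance; return the best item name, or None when no
    library entry is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in library.items():
        dist = math.dist(feature, ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The library itself could be preset locally, downloaded from a server, or refined over time, matching the update options described above.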
  • Step S103 When receiving a search request to search for a specified item, obtain the image and orientation data of the specified item from the stored image and orientation data of each item.
  • Step S104 Synthesize the image and orientation data of the designated item into the image of the current environment, and use the display device to prompt.
  • according to the orientation data of the designated item, the image and orientation data of the designated item are superimposed at the corresponding position in the image of the current environment to obtain an image that marks the position of the designated item; finally, the display device displays this image.
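The superimposition step can be illustrated with a toy sketch in which the "image" is a character grid and the marker is a single character; a real implementation would blend the item image into the AR/VR background at the corresponding pixel coordinates. All names here are illustrative, not from the patent.

```python
def mark_item_in_background(background, item_name, position):
    """Overlay a marker for the specified item at its recorded position in
    a toy 'image' represented as a grid of characters, leaving the
    original background untouched."""
    x, y = position
    canvas = [row[:] for row in background]   # copy so the cached background survives
    canvas[y][x] = item_name[0].upper()       # single-character marker for the item
    return canvas
```

The returned marked image is what the display device would then show to the user.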
  • a location record database and a location record candidate library may be preset, where the location record candidate library may record the historical orientation data (or historical locations) of each item, and the location record database may record the user-approved common orientation data (or common locations).
  • the display device includes but is not limited to an AR device.
  • any display device capable of displaying the above image may be used.
  • the method may further include: detecting whether there is a moving item in the current environment, and if a moving item is detected, acquiring an image of the moving item and determining its orientation data after the move.
  • whether there is a moving item in the current environment can be detected by infrared.
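As a stand-in for infrared motion detection, which the patent mentions but does not specify algorithmically, a simple frame-difference check can illustrate how moving items might be flagged (the list-of-intensities frame format is an illustrative simplification):

```python
def detect_moving_item(prev_frame, curr_frame, threshold=10):
    """Toy motion detector: compare two frames (lists of pixel
    intensities) and return the indices whose intensity changed by more
    than `threshold`, i.e. positions where an item may have moved."""
    return [i for i, (a, b) in enumerate(zip(prev_frame, curr_frame))
            if abs(a - b) > threshold]
```

A detected change would then trigger a rescan of the moved item so its orientation data can be updated.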
  • the method may further include: counting the historical orientation data of each item and performing big-data aggregation processing on it to determine the regular placement position corresponding to each item, and then using the display device to display the regular placement position corresponding to each item.
  • one item may correspond to one or more regular placement positions; the regular placement positions corresponding to different items may be the same or different.
  • for each item, the number or frequency of times the item is placed at each of its corresponding regular placement positions may be counted, and the positions with a high count or frequency are highlighted in the display device.
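The count/frequency statistic can be sketched as a simple frequency aggregation over an item's recorded positions, a minimal stand-in for the big-data aggregation the patent describes (the function name and tuple positions are illustrative):

```python
from collections import Counter

def usual_position(history):
    """Return the most frequent position in an item's placement history,
    i.e. the candidate 'regular placement position'; None when there is
    no history yet."""
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]
```

The display device would then highlight this position first when the item is queried.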
  • the method may further include:
  • according to the regular placement position of each item, the regular placement range corresponding to each item is determined; when it is detected that at least one item is not placed within its corresponding regular placement range, the display device is used to issue an abnormality warning.
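The range check behind the abnormality warning can be sketched as a distance test against the regular placement position; the circular range and the radius value are illustrative assumptions, since the patent does not define the shape of the placement range.

```python
def placement_warning(position, usual, radius=1.0):
    """Return True when an item's current position falls outside the
    circular 'regular placement range' around its usual position,
    signalling that an abnormality warning should be shown."""
    dx = position[0] - usual[0]
    dy = position[1] - usual[1]
    return (dx * dx + dy * dy) ** 0.5 > radius
```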
  • the embodiments of the present invention overcome the limitations of ordinary human monitoring and recording; based on big data and automation, they realize the prompting and display of specified items and orientations and improve the user's item-finding experience.
  • the item images are simulated and displayed in VR or AR, with item image and orientation information included in the display, which addresses the pain points of user queries and item orientation prompting and improves the user experience.
  • the following describes the embodiments of the present invention in detail with reference to FIGS. 2 to 6.
  • FIG. 2 is a schematic block diagram of an article reminding module architecture provided by an embodiment of the present invention. As shown in FIG. 2, the architecture may include the following modules:
  • the camera control module (or camera monitoring module) 21 is configured to manage one or more cameras in a unified way, that is, to coordinate camera monitoring and to control the cameras to scan periodically to form a cached background, which is later used for display on AR or VR devices; camera data can also be streamed directly to AR or VR devices in real time.
  • the image and orientation recognition module 22 is configured to recognize the object and record its corresponding orientation data according to the image scanned by the camera control module 21, and store the relevant data on the network hard disk for subsequent AR image or VR image synthesis.
  • The item image and orientation record database (or item image and orientation record module) 23 is configured to store successfully recognized item images and their orientations, to record the historical orientation data of items, and to compute each item's commonly used orientation through big-data aggregation.
  • The commonly used orientation is highlighted according to its frequency of use, which reflects the intelligence and convenience of the method.
  • the image synthesis module 24 is configured to generate image data for display by an AR device or a VR device from the data provided by the item image and orientation record database 23.
  • the display device 25 is configured to display the synthesized AR or VR image based on the image data provided by the image synthesis module 24, making it convenient for the user to view, visually and with emphasis, the items recorded by the image and orientation recognition module 22.
  • the display device 25 includes but is not limited to an AR device and a VR device.
  • the working process may include the following steps one to four.
  • Step 1: According to user needs, preset the image feature data (or item features, image features) of some common items. The item features can later be refined through learning and correction, or supplemented by the user through the user interface.
  • Step 2: The camera control module 21 periodically controls the cameras to perform omnidirectional (or panoramic) scanning, or scans a moving item based on infrared recognition when an item moves, and provides the scanned image and orientation data to the item image and orientation record module 23.
  • Step 3: The image synthesis module 24 (for example, a VR image synthesis module) performs processing based on the item image and orientation record database 23, so that the specific items and positions in the database can be marked in an indoor 360-degree display without blind spots.
  • Step 4: When the user wears the display device 25 (such as a VR device), the displacement and orientation of the VR device are compared and computed against the item image and orientation record database 23, and the VR background and the display of related items are updated in real time, so that the user can easily locate the related items.
  • the camera control module performs a panoramic scan to obtain and cache a home background image (or AR background image), and the home background image is stored on the network hard disk.
  • the image and orientation recognition module recognizes the item images in the home background image according to the item features, determines the orientation data of the identified item images, and records the item images and orientation data, which may also be stored on the network hard disk.
  • the image synthesis module obtains the home background image, item image and orientation data from the network hard disk, and marks the item image and location in the home background image for display devices (such as AR devices or VR devices) For display.
  • FIG. 3 is a schematic diagram of a camera identification and recording process provided by an embodiment of the present invention. As shown in FIG. 3, the process may include the following steps S401 to S406.
  • Step S401 The server presets the image features of commonly used items to form an image feature library of items as an open source shared library for different users to download and preset.
  • Step S402 After configuring the camera monitoring module 21, the user downloads the image features of common items from the server and presets them to the local camera control system (or camera monitoring system, camera control module 21).
  • Step S403 The camera scans and recognizes the item, and the camera transmits the image data to the "image and orientation recognition module 22".
  • Step S404 record the identified item, and record the item information and its position to the item image and position record database 23.
  • Step S405 The image synthesizing module 24 (for example, an AR image synthesizing module) extracts the camera shooting scene and related background to construct an AR image.
  • Step S406: The image synthesis module 24 (for example, an AR image synthesis module) uses the item images and orientation information from the item image and orientation record database 23 to construct simulated items in the AR image and to visually mark the relevant items and their positions.
  • FIG. 4 is a schematic diagram of an implementation process of item location recording and early warning provided by an embodiment of the present invention. As shown in FIG. 4, the process may include the following steps S501a to S506.
  • Step S501a The camera monitoring system sets a regular scan period, or specifies a specific scan time, and scans according to the specified time.
  • Step S501b The camera monitoring system can also trigger scanning through the movement of items.
  • Step S502 Record the scanned item and its position information.
  • Step S503 Update the database of item images and position records.
  • Step S504: Compute statistics on the aggregation rule of item orientations.
  • Step S505: An early warning is issued for any item that does not conform to the aggregation rule: the item may not be placed in its normal position, or an abnormal situation may have occurred with the item.
  • the user can set up an abnormal early warning process for specified items.
  • Step S506: The relevant early-warning data is marked through a display device (such as an AR device), so that the user can clearly see the abnormality and the items and locations where it occurred.
  • This embodiment can guide the user to form a regular habit of sorting and placing items. If the position of an item follows a certain aggregation rule, then after a movement triggers a scan, a deviation from the item's positional aggregation center can trigger a warning to the user, indicating a possible abnormal situation, including but not limited to the item being stolen or moved abnormally.
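The positional aggregation centre and the deviation test described above can be sketched as follows; mean-position clustering and the radius threshold are illustrative simplifications of the aggregation rule, which the patent leaves unspecified.

```python
def aggregation_center(history):
    """Mean of an item's recorded positions: a toy stand-in for the
    positional aggregation rule."""
    xs, ys = zip(*history)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def deviates_from_center(position, history, radius=1.0):
    """Flag a freshly scanned position lying outside `radius` of the
    aggregation centre, which may indicate theft or abnormal handling."""
    cx, cy = aggregation_center(history)
    return ((position[0] - cx) ** 2 + (position[1] - cy) ** 2) ** 0.5 > radius
```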
  • FIG. 5 is a schematic diagram of an implementation process of item query and output provided by an embodiment of the present invention. As shown in FIG. 5, the process of item orientation query may include the following steps S601 to S604.
  • Step S601 The user inputs a query through a display device (for example, VR glasses) or voice, and inputs an instruction to the camera control module 21.
  • Step S602: The system queries the location record database first, because the information in this database has been confirmed correct in practice.
  • Step S603 If there is no relevant record in the location record database, query from the location record candidate database.
  • the location record candidate library records the possible locations of the items.
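The two-tier lookup order, confirmed location record database first and candidate library second, can be sketched as follows; the dict-based databases and the returned status labels are illustrative assumptions.

```python
def query_item_position(name, record_db, candidate_db):
    """Look an item up in the confirmed location record database first;
    fall back to the candidate library of possible (unconfirmed)
    locations only when no confirmed record exists."""
    if name in record_db:
        return record_db[name], "confirmed"
    if name in candidate_db:
        return candidate_db[name], "candidate"
    return None, "not_found"
```

A "candidate" result is the case where the user may be asked to confirm the output, after which the location could be promoted into the record database.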
  • Step S604: After the related item and orientation information is found, it is output to the user. If the user wears a display device (such as an AR device), the result is simulated and displayed on the display device.
  • the article image and position record database 23 includes the position record database and the position record candidate database.
  • Some home appliances are controlled by a remote control, but the remote control is sometimes placed irregularly, or different users have different placement and usage habits, so it often takes time to find the remote control.
  • The following uses a common remote control as an example to describe the remote-control display flow, which may include the following steps 1 to 11.
  • Step 1: Preset the image features of one or more remote controls, or download the related remote-control image feature library from the server.
  • Step 2 The user sets a scheduled scan in the camera monitoring system.
  • Step 3: After the set time arrives, all cameras perform a global 360-degree scan (or panoramic scan), or a camera detects item movement through infrared monitoring and scans the items in the direction of movement.
  • Step 4: If the relevant remote control is scanned, its orientation is recorded.
  • Step 5 The user queries the remote control.
  • Step 6: The camera monitoring system queries the location record database first; if related records are found, they are output directly to a display device (such as an AR device) and displayed visually. If not, the location record candidate library is queried.
  • Step 7 The user confirms whether the output orientation and items meet the requirements.
  • Step 8: If the requirements are met, the process ends. If not, the user may need to supplement information about the queried item.
  • Step 9: The camera monitoring system performs an aggregation operation on the remote control's recorded orientation data when the system is idle in the background, and records the aggregation rule if one is found.
  • Step 10: Once a scan or movement detection finds that the remote control has been displaced and no longer conforms to the orientation aggregation rule, an alarm is raised through the VR device or AR device.
  • Step 11 The user places the remote control in its normal orientation.
  • As shown in FIG. 7, the device may include: an environment image obtaining module 71 (which may implement the functions of the camera control module 21 of FIG. 2), a designated item search module 73, and a designated item prompting module 74 (which may implement the functions of the image synthesis module 24 of FIG. 2).
  • the item prompting device further includes an item information obtaining module 72 (which can implement the functions of the image and position recognition module 22 of FIG. 2).
  • the environment image obtaining module 71 is configured to obtain an image of the current environment.
  • the environment image obtaining module 71 determines whether the trigger condition for camera activation is satisfied, and if the trigger condition is satisfied, activates the camera deployed in the current environment to scan and obtain the image of the current environment.
  • the trigger condition is a time condition, such as a timing trigger, a specific time trigger, a periodic trigger, and so on.
  • a camera is used for panoramic scanning (or omnidirectional scanning, 360 degree scanning).
  • the image of the current environment may be used to construct a background image, and the background image may be directly displayed by a display device, and the display device may be a VR device or an AR device.
  • the article information obtaining module 72 is configured to identify each article in the image of the current environment, obtain the image and orientation data of each article, and save the image and orientation data of each article.
  • the item information obtaining module 72 identifies the image of each item from the image of the current environment according to a preset item feature library, and determines the position of each identified item relative to other items as the orientation data of each item.
  • the item information obtaining module 72 may establish a relative spatial coordinate system according to the image of the current environment scanned by the camera, for example taking any point in the current environment as the coordinate origin, and then determine the coordinate position of each item relative to the coordinate origin as the orientation data of each item.
  • the article feature library may be preset locally, or may be downloaded from a network-side server in advance, or may be obtained through self-learning.
  • the article feature library may be updated regularly.
  • the specified item search module 73 is configured to obtain the image and orientation data of the specified item from the stored image and orientation data of each item when a search request for the specified item is received.
  • the designated item prompting module 74 is configured to synthesize the image and orientation data of the designated item into the image of the current environment, and to prompt using a display device.
  • according to the orientation data of the designated item, the designated item prompting module 74 superimposes the image and orientation data of the designated item at the corresponding position in the image of the current environment to obtain an image that marks the position of the designated item; finally, the display device displays this image.
  • a location record database and a location record candidate library may be preset, wherein the historical record data (or historical location) of each item may be recorded in the location record candidate library, and the location record database may record the user Approved common location data (or common location).
  • the display device includes but is not limited to an AR device. Of course, in other embodiments, it may be any display device capable of displaying the above image.
  • the item information obtaining module 72 may be further configured to detect whether there is a moving item in the current environment, and if a moving item is detected, to obtain an image of the moving item and determine its orientation data after the move.
  • the item information obtaining module 72 may detect whether there are moving items in the current environment through infrared.
  • the device may further include:
  • the orientation data statistics module (not shown in FIG. 7) is configured to count the historical orientation data of each item, perform big-data aggregation processing on the historical orientation data of each item to determine the regular placement position corresponding to each item, and then use the display device to display the regular placement position corresponding to each item.
  • the orientation data statistics module may also be configured to count the number or frequency of times each item is placed at each corresponding regular placement position, and to highlight in the display device the positions with a high count or frequency.
  • the device may further include:
  • the position abnormality warning module (not shown in FIG. 7) is configured to determine, according to the regular placement position of each item, the regular placement range corresponding to each item, and to issue an abnormality warning through the display device when it is detected that at least one item is not placed within its corresponding regular placement range.
  • the position abnormality warning module is helpful for the user to find that the item is out of the normal placement range in time, and helps the user to form a habit of placing each of the items into the specified range.
  • FIG. 8 is a schematic structural block diagram of an item prompting device according to an embodiment of the present invention.
  • the device may include: a processor 81 and a memory 82, where the memory 82 stores an item prompting program runnable on the processor 81; when the item prompting program is executed by the processor 81, the steps of the item prompting method are implemented.
  • An embodiment of the present invention provides a computer-readable medium on which an item prompting program is stored, and when the item prompting program is executed by a processor, the steps of the item prompting method are implemented.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can store the desired information and can be accessed by a computer.
  • Communication media generally embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • the embodiments of the present invention can intelligently prompt and display items and their position information through display devices (such as AR and VR devices), can perform item aggregation-rule recognition and position abnormality warning, and can effectively and uniformly monitor the home environment through a camera monitoring system, bringing home safety, ease of use, and the overall user experience to a new level.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an item prompting method, the method comprising: obtaining an image of the current environment; when a search request to find a specified item is received, acquiring the image and orientation data of the specified item; and synthesizing the image and orientation data of the specified item into the image of the current environment, and prompting with a display device. The present disclosure also provides an item prompting apparatus, a device, and a computer-readable medium.

Description

Item prompting method, apparatus, device, and computer-readable medium
Technical Field
The embodiments of the present invention relate to, but are not limited to, an item prompting method, apparatus, device, and computer-readable medium.
Background
Because memory is subject to forgetting and vision is limited by occlusion or lighting conditions, when a user needs an item, the item often cannot be found due to forgetting, blocked lines of sight, and similar situations.
Summary of the Invention
The embodiments of the present invention provide an item prompting method, apparatus, device, and computer-readable medium to solve the problem that items are difficult to find.
An embodiment of the present invention provides an item prompting method, the method comprising:
obtaining an image of the current environment;
when a search request to find a specified item is received, acquiring the image and orientation data of the specified item;
synthesizing the image and orientation data of the specified item into the image of the current environment, and prompting with a display device.
An embodiment of the present invention provides an item prompting apparatus, the apparatus comprising:
an environment image obtaining module, configured to obtain an image of the current environment;
a designated item search module, configured to acquire the image and orientation data of a specified item when a search request to find the specified item is received;
a designated item prompting module, configured to synthesize the image and orientation data of the specified item into the image of the current environment and to prompt with a display device.
An embodiment of the present invention provides an item prompting device, the device comprising: a processor and a memory, the memory storing an item prompting program runnable on the processor; when the item prompting program is executed by the processor, the steps of the item prompting method are implemented.
An embodiment of the present invention provides a computer-readable medium on which an item prompting program is stored; when the item prompting program is executed by a processor, the steps of the item prompting method are implemented.
Brief Description of the Drawings
FIG. 1a is a schematic flowchart of an item prompting method provided by an embodiment of the present invention.
FIG. 1b is a schematic flowchart of another item prompting method provided by an embodiment of the present invention.
FIG. 2 is a schematic block diagram of an item prompting module architecture provided by an embodiment of the present invention.
FIG. 3 is a schematic flowchart of camera identification and recording provided by an embodiment of the present invention.
FIG. 4 is a schematic flowchart of the implementation of item position recording and early warning provided by an embodiment of the present invention.
FIG. 5 is a schematic flowchart of the implementation of item query and output provided by an embodiment of the present invention.
FIG. 6 is a flowchart of a method for implementing augmented reality prompting and display of items provided by an embodiment of the present invention.
FIG. 7 is a schematic structural block diagram of an item prompting apparatus provided by an embodiment of the present invention.
FIG. 8 is a schematic structural block diagram of an item prompting device provided by an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the drawings. It should be understood that the embodiments described below are intended only to illustrate and explain the present invention, not to limit it. As used herein, "comprise", "include", "have", "contain", and the like are all open-ended terms, meaning including but not limited to.
After storing the items and orientations scanned and recognized by the camera in a database, the embodiments of the present invention can combine the movement of an augmented reality (AR) device or virtual reality (VR) device with the orientations of the items in the database and display simulated related items, thereby prompting the user with item names and related image information and making it convenient for the user to quickly locate and control related indoor items (such as household items).
FIG. 1a is a schematic flowchart of an item prompting method provided by an embodiment of the present invention. As shown in FIG. 1a, the method may include the following steps.
Step S101: Obtain an image of the current environment.
Step S103: When a search request to find a specified item is received, acquire the image and orientation data of the specified item.
Step S104: Synthesize the image and orientation data of the specified item into the image of the current environment, and prompt with a display device.
The embodiments of the present invention combine items with the current environment in which they are placed, and prompt and display the item to be found and its orientation through a display device, making it easy for the user to find items and improving the user experience.
Fig. 1b is a schematic flowchart of another item prompting method provided by an embodiment of the present invention. As shown in Fig. 1b, the method may include the following steps.
Step S101: obtain an image of the current environment.
In some implementations, it is determined whether a trigger condition for starting the cameras is met; if the trigger condition is met, the cameras deployed in the current environment are started to scan, yielding the image of the current environment.
In some implementations, the trigger condition is a time condition, for example a timer trigger, a specific-time trigger, or a periodic trigger.
In some implementations, one or more cameras may be deployed in the current environment.
In some implementations, the cameras perform a panoramic scan (also called an omnidirectional or 360-degree scan).
In some implementations, the image of the current environment may be used to construct a background image, which can be displayed directly by a display device; the display device may be a VR device or an AR device.
Step S102: recognize each item in the image of the current environment to obtain the image and orientation data of each item, and save the image and orientation data of each item.
In some implementations, the image of each item is recognized from the image of the current environment according to a preset item feature library, and the position of each recognized item relative to the other items is determined as the orientation data of that item.
In some implementations, a relative spatial coordinate system may be established from the image of the current environment captured by the cameras, for example by taking an arbitrary point in the current environment as the coordinate origin and then determining the coordinates of each item relative to that origin as the item's orientation data.
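To make the relative coordinate scheme concrete, the following is a minimal illustrative sketch (all names and values are hypothetical, not part of the disclosed embodiments): item positions from a scan are re-expressed relative to an arbitrarily chosen origin point to form orientation data.

```python
def orientation_data(items, origin):
    """Return each item's position relative to the chosen coordinate origin.

    items: dict mapping item name -> absolute (x, y, z) position from the scan.
    origin: absolute (x, y, z) of the point chosen as coordinate origin.
    """
    ox, oy, oz = origin
    return {name: (x - ox, y - oy, z - oz) for name, (x, y, z) in items.items()}

# Hypothetical scan result; any point in the environment may serve as origin.
scan = {"remote": (3.2, 1.0, 0.4), "keys": (0.5, 2.5, 0.9)}
rel = orientation_data(scan, origin=(0.5, 2.5, 0.9))
```

Because the origin is arbitrary, the same environment can be re-anchored at any point without rescanning; only the subtraction changes.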
In this implementation, the item feature library may be preset locally, downloaded in advance from a network-side server, or obtained through self-learning, and it may be updated periodically.
Step S103: when a search request for locating a specified item is received, acquire the image and orientation data of the specified item from the saved images and orientation data of the items.
Step S104: composite the image and orientation data of the specified item into the image of the current environment, and present a prompt via a display device.
In some implementations, according to the orientation data of the specified item, the image and orientation data of the specified item are superimposed at the corresponding position in the image of the current environment, yielding an image that indicates the position of the specified item; finally, the display device displays that image.
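The superimposition step can be sketched as follows; this is a simplified illustration (images as plain nested lists, hypothetical function name), not the disclosed compositing module: the item's image is pasted into the environment image at the position given by its orientation data.

```python
def overlay(background, item_img, top, left):
    """Paste item_img (2D list of pixels) into a copy of background at
    (top, left); pixels that would fall outside the background are clipped."""
    out = [row[:] for row in background]  # copy so the cached background survives
    for r, row in enumerate(item_img):
        for c, px in enumerate(row):
            rr, cc = top + r, left + c
            if 0 <= rr < len(out) and 0 <= cc < len(out[0]):
                out[rr][cc] = px
    return out

bg = [[0] * 5 for _ in range(4)]          # stand-in for the environment image
marker = [[9, 9], [9, 9]]                 # stand-in for the item image/marker
composited = overlay(bg, marker, top=1, left=2)
```

Copying the background first matters: the cached environment image is reused for every query, so the prompt image must be derived, not destructive.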
In some implementations, a position-record database and a position-record candidate database may be set up in advance, where the candidate database may record the historical orientation data (or historical positions) of each item, and the position-record database may record the usual orientation data (or usual positions) confirmed by the user.
In some implementations, the display device includes, but is not limited to, an AR device. In other implementations it may be any display apparatus capable of showing the above image.
On the basis of the above implementations, the method may further include: detecting whether there is a moving item in the current environment; if a moving item is detected in the current environment, acquiring the image of the moving item and determining its orientation data after the movement.
In some implementations, infrared sensing may be used to detect whether there is a moving item in the current environment.
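As a software analogue of this detection step (the embodiment uses infrared sensing; the frame-differencing below is only an illustrative stand-in with hypothetical names and thresholds), movement can be flagged when enough pixels change between two consecutive camera frames:

```python
def movement_detected(prev_frame, cur_frame, pixel_delta=10, min_changed=3):
    """Return True if enough pixels changed between two grayscale frames.

    Frames are flat lists of equal length with 0-255 intensities.
    pixel_delta: intensity change for a pixel to count as 'changed'.
    min_changed: how many changed pixels constitute movement.
    """
    changed = sum(1 for a, b in zip(prev_frame, cur_frame)
                  if abs(a - b) >= pixel_delta)
    return changed >= min_changed

still = [0] * 10
moved = [50] * 10
```

A positive result would then trigger the scan of the moving item described above; the thresholds trade sensitivity against noise and would need tuning per camera.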
Further, on the basis of the above implementations, the method may also include:
collecting the historical orientation data of each item, performing big-data aggregation processing on the historical orientation data of each item to determine the regular placement position(s) corresponding to each item, and then displaying each item's regular placement position(s) on the display device.
An item may correspond to one regular placement position or several; the regular placement positions of different items may be the same or different.
In some implementations, for each item, the number or frequency of times the item has been placed at each of its regular placement positions may be counted, and the positions with higher counts or frequencies may be highlighted on the display device.
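The count/frequency statistic above can be sketched with a plain counter over one item's recorded positions (names are hypothetical; the actual aggregation algorithm is not specified by the embodiments):

```python
from collections import Counter

def usual_positions(history, top_n=2):
    """Rank the recorded positions of one item by how often it was seen there,
    most frequent first - these are candidates for 'regular placement positions'."""
    return Counter(history).most_common(top_n)

# Hypothetical scan history for one item, e.g. a remote control.
history = ["sofa", "desk", "sofa", "shelf", "sofa", "desk"]
ranked = usual_positions(history)  # -> [("sofa", 3), ("desk", 2)]
```

The display device would then emphasize the head of this ranking, matching the "highlight higher counts or frequencies" behavior described above.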
Further, on the basis of the above implementations, the method may also include:
determining, from each item's regular placement position(s), the regular placement range corresponding to each item, and issuing an anomaly warning via the display device when at least one item is detected outside its regular placement range.
This implementation helps the user promptly notice when an item has left its regular placement range, encouraging the habit of returning each item to its designated area.
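The range check can be illustrated as follows, assuming a simple centroid-plus-radius model of the regular placement range (one possible reading; the embodiments do not fix a particular model, and all names here are hypothetical):

```python
def placement_center(history):
    """Centroid of an item's recorded (x, y) positions."""
    xs, ys = zip(*history)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def out_of_range(position, center, radius):
    """True if the item lies outside its regular placement range,
    modeled as a circle of the given radius around the centroid."""
    dx, dy = position[0] - center[0], position[1] - center[1]
    return (dx * dx + dy * dy) ** 0.5 > radius

center = placement_center([(0, 0), (2, 0)])  # centroid of past positions
```

When `out_of_range` returns True for a newly scanned position, the display device would raise the anomaly warning described above.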
Embodiments of the present invention overcome the limitations of ordinary manual monitoring and recording; based on big data and automation, they prompt and display the specified item and its orientation, improving the user's item-finding experience.
With the spread of smart homes, more and more households use one or more surveillance cameras, some of which also provide infrared night vision and motion monitoring. Embodiments of the present invention can recognize commonly used items through the cameras and record them, thereby solving the difficulty of finding items, for example items that are hard to find after being put down at night. When the user needs an item, these records can be queried by item keyword (by voice, text, and so on), guiding the user quickly to the item's position; this is highly convenient, saves the time spent searching for items day to day, and improves the user experience. Furthermore, embodiments of the present invention render the item image in VR or AR, together with information such as the item image and orientation, addressing the user's pain points of querying items and being prompted with their locations. Embodiments of the present invention are described in detail below with reference to Figs. 2 to 6.
Fig. 2 is a schematic block diagram of a module architecture for item prompting provided by an embodiment of the present invention. As shown in Fig. 2, the architecture may include the following modules:
Camera control module (or camera monitoring module) 21, configured to manage one or more cameras in a unified way, i.e. to coordinate camera monitoring and to have the cameras scan periodically to build a cached background for later display on an AR or VR device; camera data may also be streamed directly to the AR or VR device in real time.
Image and orientation recognition module 22, configured to recognize objects from the images scanned by the camera control module 21, record their orientation data, and save the data on a network drive for later AR or VR image compositing.
Item image and orientation record database (or item image and orientation record module) 23, configured to store the images and orientations of successfully recognized items, keep historical records of item orientations, and compute each item's usual orientation through big-data aggregation. Positions are highlighted according to how frequently the item is moved and used, underscoring the intelligence and convenience of the method.
Image compositing module 24, configured to generate, from the data provided by the item image and orientation record database 23, the image data to be displayed by an AR or VR device.
Display device 25, configured to display the composited AR or VR image based on the image data provided by the image compositing module 24, allowing the user to view, vividly and with emphasis, the items recorded by the image and orientation recognition module 22. The display device 25 includes, but is not limited to, AR devices and VR devices.
Based on the modules of Fig. 2, the workflow may include the following steps one to four.
Step one: preset image feature data (item features or image features) of some commonly used items according to the user's needs; later the user may also extract item features through learning and correction, or supplement item features through a user interface.
Step two: the camera control module 21 periodically makes the cameras perform an omnidirectional (panoramic) scan, or scans a moving object when infrared sensing detects movement, and supplies the scanned images and orientation data to the item image and orientation record module 23.
Step three: the image compositing module 24 (for example a VR image compositing module) processes the item image and orientation record database 23 so that specific items and positions from the database can later be marked in an indoor 360-degree, blind-spot-free display.
Step four: when the user wears the display device 25 (for example a VR device), the displacement, orientation, and facing direction of the VR device are compared against the item image and orientation record database 23, and the VR background and the display of the relevant items are updated in real time, so the user can conveniently locate the items.
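Step four's comparison between the device pose and the recorded item orientations can be sketched under the simplifying assumption of a 2D field-of-view test (hypothetical names, not the disclosed algorithm): an item is shown when its bearing from the device falls within the device's current view cone.

```python
import math

def visible_items(device_pos, facing_deg, fov_deg, items):
    """Items whose direction from the device lies within the field of view.

    device_pos: (x, y) of the display device; facing_deg: heading in degrees;
    fov_deg: full field-of-view angle; items: name -> (x, y) from the database.
    """
    seen = []
    for name, (x, y) in items.items():
        bearing = math.degrees(math.atan2(y - device_pos[1], x - device_pos[0]))
        diff = (bearing - facing_deg + 180) % 360 - 180  # signed angle difference
        if abs(diff) <= fov_deg / 2:
            seen.append(name)
    return seen

recorded = {"tv": (1, 0), "lamp": (0, 1)}
in_view = visible_items((0, 0), facing_deg=0, fov_deg=90, items=recorded)
```

Re-running this check as the device pose changes gives the real-time update of which items are rendered into the background.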
Fig. 3 is a flow block diagram of implementing AR prompting and display of items provided by an embodiment of the present invention. As shown in Fig. 3, on the one hand the camera control module performs a panoramic scan to obtain and cache a home background image (or AR background image), which is also stored on the network drive. The image and orientation recognition module recognizes the item images in the home background image according to the item features, determines the orientation data of the recognized item images, and records the item images and orientation data, which may likewise be stored on the network drive. The image compositing module retrieves the home background image, item images, and orientation data from the network drive and marks the item images and positions in the home background image for display by a display device (for example an AR or VR device). On the other hand, the home background image obtained from the panoramic scan may also undergo real-time image processing (for example VR image processing) and then be displayed on a VR device such as VR glasses.
Fig. 4 is a schematic flowchart of camera recognition and recording provided by an embodiment of the present invention. As shown in Fig. 4, the flow may include the following steps S401 to S406.
Step S401: the server presets image features of commonly used items to form an item image feature library, shared as an open-source library for different users to download and preset.
Step S402: after configuring the camera monitoring module 21, the user downloads the image features of commonly used items from the server and presets them in the local camera control system (or camera monitoring system, camera control module 21).
Step S403: the cameras scan and recognize items; the cameras pass the image data to the image and orientation recognition module 22, which completes the recognition.
Step S404: record the recognized items, storing the item information and orientations in the item image and orientation record database 23.
Step S405: the image compositing module 24 (for example an AR image compositing module) extracts the scene captured by the cameras and the related background to construct the AR image.
Step S406: the image compositing module 24 (for example an AR image compositing module) uses the item and orientation information in the item image and orientation record database 23 to construct simulated items in the AR image, vividly marking the items and their positions.
Fig. 5 is a schematic flowchart of implementing item position recording and early warning provided by an embodiment of the present invention. As shown in Fig. 5, the flow may include the following steps S501a to S506.
Step S501a: the camera monitoring system sets a periodic scan interval, or specifies particular scan times, and scans at the specified times.
Step S501b: the camera monitoring system may also trigger a scan when an item moves.
Step S502: record the scanned items and their orientation information.
Step S503: update the item image and orientation record database.
Step S504: compute the aggregation pattern of item orientations.
Step S505: issue a warning for items that do not fit their aggregation pattern; the item may not have been placed at its usual position, or something abnormal may have happened to it.
It should be noted that the user can configure anomaly warnings for specified items.
Step S506: the warning data is marked on the display device (for example an AR device), so the user can clearly see the anomaly and the item and position involved.
This embodiment can guide the user into the habit of sorting and storing items regularly. If an item's positions exhibit an aggregation pattern and, after a movement-triggered scan, the item has left its position aggregation center, the user can be warned that something abnormal may have occurred, including but not limited to the item being stolen or mishandled.
Fig. 6 is a schematic flowchart of implementing item query and output provided by an embodiment of the present invention. As shown in Fig. 6, the item orientation query flow may include the following steps S601 to S604.
Step S601: the user enters a query through the display device (for example VR glasses) or by voice; the command is fed to the camera control module 21.
Step S602: the system queries the position-record database first, because the information in this database has been verified in practice.
Step S603: if the position-record database has no relevant record yet, the position-record candidate database is queried.
The position-record candidate database records positions where an item may be.
Step S604: after the relevant item and orientation information is found, it is output to the user; if the user is wearing a display device (for example an AR device), it is rendered on that device.
The item image and orientation record database 23 comprises the position-record database and the position-record candidate database.
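The two-tier query order of steps S602 and S603 can be sketched as a simple fallback lookup (hypothetical names; the two databases are modeled here as plain dicts):

```python
def query_item(name, record_db, candidate_db):
    """Query the confirmed position-record database first; fall back to the
    candidate database of possible positions only when no record exists."""
    if name in record_db:
        return record_db[name], "confirmed"
    if name in candidate_db:
        return candidate_db[name], "candidate"
    return None, "not found"

# Hypothetical contents: confirmed positions vs. merely observed candidates.
record_db = {"remote": (1, 2)}
candidate_db = {"keys": (3, 4)}
```

The returned tag lets the display device distinguish a verified position from a tentative one, matching the priority the flow gives to practice-verified records.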
At present, some household appliances can be controlled with remote controls, but remotes are often put down in irregular places, or different users have different placement and usage habits, so time is frequently wasted looking for them. With the method of the embodiments of the present invention, a remote control can be found quickly, and the user is nudged to keep it in a fixed spot. Taking a common remote control as an example, the display flow may include the following steps 1 to 11.
Step 1: preset the bitmap features of one or more remote controls, or download a remote-control bitmap feature library from the server.
Step 2: the user sets periodic scanning in the camera monitoring system.
Step 3: when the scheduled time arrives, all cameras perform a global 360-degree (panoramic) scan, or a camera that detects item movement through infrared monitoring scans the items in the direction of the movement.
Step 4: if a remote control is found in the scan, record its orientation.
Step 5: the user queries for the remote control.
Step 6: the camera monitoring system queries the position-record database first; if a record is found, it is output directly to the display device (for example an AR device) and rendered vividly; otherwise the position-record candidate database is queried.
Step 7: the user confirms whether the output orientation and item meet the need.
Step 8: if so, the flow ends; if not, information about the queried item may need to be supplemented.
Step 9: during idle time, the camera monitoring system aggregates the remote control's orientation records in the background; if an aggregation pattern is found, it is recorded.
Step 10: afterwards, once a scan or movement detection finds that the remote control's position has changed in a way that does not fit its orientation aggregation pattern, an alarm is raised via the VR or AR device.
Step 11: the user returns the remote control to its usual spot.
Fig. 7 is a schematic structural block diagram of an item prompting apparatus provided by an embodiment of the present invention. As shown in Fig. 7, the apparatus may include: an environment image obtaining module 71 (which can implement the functions of the camera control module 21 of Fig. 2), a specified item search module 73, and a specified item prompting module 74 (which can implement the functions of the image compositing module 24 of Fig. 2).
In some embodiments, the item prompting apparatus further includes an item information obtaining module 72 (which can implement the functions of the image and orientation recognition module 22 of Fig. 2).
The environment image obtaining module 71 is configured to obtain an image of the current environment.
In some implementations, the environment image obtaining module 71 determines whether a trigger condition for starting the cameras is met; if the trigger condition is met, it starts the cameras deployed in the current environment to scan, yielding the image of the current environment.
In some implementations, the trigger condition is a time condition, for example a timer trigger, a specific-time trigger, or a periodic trigger.
In some implementations, one or more cameras may be deployed in the current environment.
In some implementations, the cameras perform a panoramic scan (also called an omnidirectional or 360-degree scan).
In some implementations, the image of the current environment may be used to construct a background image, which can be displayed directly by a display device; the display device may be a VR device or an AR device.
The item information obtaining module 72 is configured to recognize each item in the image of the current environment to obtain the image and orientation data of each item, and to save the image and orientation data of each item.
In some implementations, the item information obtaining module 72 recognizes the image of each item from the image of the current environment according to a preset item feature library, and determines the position of each recognized item relative to the other items as the orientation data of that item.
In some implementations, the item information obtaining module 72 may establish a relative spatial coordinate system from the image of the current environment captured by the cameras, for example by taking an arbitrary point in the current environment as the coordinate origin and then determining the coordinates of each item relative to that origin as the item's orientation data.
In the above implementations, the item feature library may be preset locally, downloaded in advance from a network-side server, or obtained through self-learning, and it may be updated periodically.
The specified item search module 73 is configured to, when a search request for a specified item is received, acquire the image and orientation data of the specified item from the saved images and orientation data of the items.
The specified item prompting module 74 is configured to composite the image and orientation data of the specified item into the image of the current environment and present a prompt via a display device.
In some implementations, the specified item prompting module 74 superimposes, according to the orientation data of the specified item, the image and orientation data of the specified item at the corresponding position in the image of the current environment, yielding an image that indicates the position of the specified item, and finally displays that image on the display device.
In some implementations, a position-record database and a position-record candidate database may be set up in advance, where the candidate database may record the historical orientation data (or historical positions) of each item, and the position-record database may record the usual orientation data (or usual positions) confirmed by the user.
In some implementations, the display device includes, but is not limited to, an AR device. In other implementations it may be any display device capable of showing the above image.
On the basis of the above implementations, the item information obtaining module 72 may further be configured to detect whether there is a moving item in the current environment and, if a moving item is detected, to acquire the image of the moving item and determine its orientation data after the movement.
In this implementation, the item information obtaining module 72 may use infrared sensing to detect whether there is a moving item in the current environment.
Further, on the basis of the above implementations, the apparatus may also include:
an orientation data statistics module (not shown in Fig. 7), configured to collect the historical orientation data of each item, perform big-data aggregation processing on the historical orientation data of each item to determine the regular placement position(s) corresponding to each item, and then display each item's regular placement position(s) on the display device.
In this implementation, the orientation data statistics module may further be configured to count the number or frequency of times each item has been placed at each of its regular placement positions and to highlight, on the display device, the positions with higher counts or frequencies.
Further, on the basis of the above implementations, the apparatus may also include:
a position anomaly warning module (not shown in Fig. 7), configured to determine, from the regular placement position(s) of each item, the regular placement range corresponding to each item, and to issue an anomaly warning via the display device when at least one item is detected outside its regular placement range.
The position anomaly warning module helps the user promptly notice when an item has left its regular placement range, encouraging the habit of returning each item to its designated area.
Fig. 8 is a schematic structural block diagram of an item prompting device provided by an embodiment of the present invention. As shown in Fig. 8, the device may include a processor 81 and a memory 82, the memory 82 storing an item prompting program executable on the processor 81, which, when executed by the processor 81, implements the steps of the item prompting method described above.
An embodiment of the present invention provides a computer-readable medium storing an item prompting program which, when executed by a processor, implements the steps of the item prompting method described above.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatus, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Embodiments of the present invention can intelligently prompt and display items and their orientation information through a display device (for example an AR or VR device), recognize item aggregation patterns, and warn of position anomalies, and they effectively unify the home environment through the camera monitoring system, raising the user experience in home security and ease of use to a new level.
Although the present invention has been described in detail above, the invention is not limited thereto, and those skilled in the art can make various modifications according to the principles of the present invention. Therefore, all modifications made in accordance with the principles of the present invention should be understood as falling within the protection scope of the present invention.

Claims (11)

  1. An item prompting method, wherein the method comprises:
    obtaining an image of the current environment;
    when a search request for locating a specified item is received, acquiring the image and orientation data of the specified item;
    compositing the image and orientation data of the specified item into the image of the current environment, and presenting a prompt via a display device.
  2. The method according to claim 1, wherein after the obtaining an image of the current environment and before the acquiring the image and orientation data of the specified item, the method further comprises:
    recognizing each item in the image of the current environment to obtain the image and orientation data of each item, and saving the image and orientation data of each item;
    the acquiring the image and orientation data of the specified item specifically comprises:
    acquiring the image and orientation data of the specified item from the saved images and orientation data of the items.
  3. The method according to claim 1 or 2, wherein before the obtaining an image of the current environment, the method further comprises:
    determining whether a trigger condition for starting the cameras is met;
    if the trigger condition for starting the cameras is met, performing the step of obtaining an image of the current environment;
    the obtaining an image of the current environment comprises:
    starting the cameras deployed in the current environment, and scanning with the started cameras to obtain the image of the current environment.
  4. The method according to claim 2, wherein the recognizing each item in the image of the current environment to obtain the image and orientation data of each item comprises:
    recognizing the image of each item from the image of the current environment according to a preset item feature library, and determining the position of each recognized item relative to the other items as the orientation data of that item.
  5. The method according to claim 2, wherein the method further comprises:
    detecting whether there is a moving item in the current environment;
    if a moving item is detected in the current environment, acquiring the image of the moving item, and determining the orientation data of the moving item after the movement.
  6. The method according to claim 1 or 2, wherein the compositing the image and orientation data of the specified item into the image of the current environment, and presenting a prompt via a display device comprises:
    superimposing, according to the orientation data of the specified item, the image and orientation data of the specified item at the corresponding position in the image of the current environment, to obtain an image capable of indicating the position of the specified item;
    displaying, with the display device, the image capable of indicating the position of the specified item.
  7. The method according to claim 2, wherein the method further comprises:
    collecting the historical orientation data of each item;
    performing big-data aggregation processing on the historical orientation data of each item, to determine the regular placement position(s) corresponding to each item;
    displaying, with the display device, the regular placement position(s) corresponding to each item.
  8. The method according to claim 7, wherein the method further comprises:
    determining, according to the regular placement position(s) of each item, the regular placement range corresponding to each item;
    issuing an anomaly warning with the display device when at least one item is detected outside the regular placement range.
  9. An item prompting apparatus, wherein the apparatus comprises:
    an environment image obtaining module, configured to obtain an image of the current environment;
    a specified item search module, configured to, when a search request for locating a specified item is received, acquire the image and orientation data of the specified item;
    a specified item prompting module, configured to composite the image and orientation data of the specified item into the image of the current environment and present a prompt via a display device.
  10. An item prompting device, the device comprising a processor and a memory, wherein the memory stores an item prompting program executable on the processor, and the item prompting program, when executed by the processor, implements the steps of the item prompting method according to any one of claims 1 to 8.
  11. A computer-readable medium, wherein an item prompting program is stored thereon, and the item prompting program, when executed by a processor, implements the steps of the item prompting method according to any one of claims 1 to 8.
PCT/CN2019/121974 2018-11-30 2019-11-29 Item prompting method, apparatus, device and computer-readable medium WO2020108618A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811451964.2 2018-11-30
CN201811451964.2A CN111259091A (zh) 2018-11-30 2018-11-30 Item prompting method, apparatus, device and computer-readable medium

Publications (1)

Publication Number Publication Date
WO2020108618A1 true WO2020108618A1 (zh) 2020-06-04

Family

ID=70852445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121974 WO2020108618A1 (zh) 2018-11-30 2019-11-29 Item prompting method, apparatus, device and computer-readable medium

Country Status (2)

Country Link
CN (1) CN111259091A (zh)
WO (1) WO2020108618A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI826129B (zh) * 2022-11-18 2023-12-11 英業達股份有限公司 Cycle time detection and correction system and method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN205899559U (zh) * 2016-05-18 2017-01-18 北大方正集团有限公司 Apparatus for finding items
CN106650570A (zh) * 2016-09-06 2017-05-10 深圳市金立通信设备有限公司 Item searching method and terminal
US20180101810A1 (en) * 2016-10-12 2018-04-12 Cainiao Smart Logistics Holding Limited Method and system for providing information of stored object
CN108256576A (zh) * 2017-07-18 2018-07-06 刘奕霖 Item display method and device, storage medium, and processor

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
WO2010030392A2 (en) * 2008-09-12 2010-03-18 Dimitris Achlioptas Interpersonal spacetime interaction system
US10386191B2 (en) * 2015-09-22 2019-08-20 Clever Devices Ltd. Synthetic data collection for vehicle controller
CN106920079B (zh) * 2016-12-13 2020-06-30 阿里巴巴集团控股有限公司 Augmented-reality-based virtual object allocation method and apparatus
US10467578B2 (en) * 2017-05-08 2019-11-05 Wing Aviation Llc Methods and systems for requesting and displaying UAV information
CN107293098A (zh) * 2017-06-20 2017-10-24 北京京东尚科信息技术有限公司 Expiration reminder method, apparatus, terminal, and computer-readable medium
CN108596052B (zh) * 2018-04-09 2020-10-02 Oppo广东移动通信有限公司 Item searching method and system, and terminal device


Also Published As

Publication number Publication date
CN111259091A (zh) 2020-06-09

Similar Documents

Publication Publication Date Title
CN106598071B (zh) 跟随式的飞行控制方法及装置、无人机
EP2775374B1 (en) User interface and method
US9953506B2 (en) Alarming method and device
US11145130B2 (en) Method for automatically capturing data from non-networked production equipment
CN107818180B (zh) 视频关联方法、视频显示方法、装置及存储介质
US9205886B1 (en) Systems and methods for inventorying objects
CN108234918B (zh) 具有隐私意识的室内无人机的勘探和通讯架构方法和系统
US10115019B2 (en) Video categorization method and apparatus, and storage medium
JP6326552B2 (ja) 物体識別方法、装置、プログラム及び記録媒体
US10733799B2 (en) Augmented reality sensor
US20220114882A1 (en) Discovery Of And Connection To Remote Devices
CN111582038A (zh) 查找物品方法、装置、存储介质及移动机器人
US10855728B2 (en) Systems and methods for directly accessing video data streams and data between devices in a video surveillance system
US20240029372A1 (en) Method for automatically capturing data from non-networked production equipment
CN114529621B (zh) 户型图生成方法、装置、电子设备及介质
WO2020108618A1 (zh) 一种物品提示方法、装置、设备及计算机可读介质
US10215858B1 (en) Detection of rigid shaped objects
US11574502B2 (en) Method and device for identifying face, and computer-readable storage medium
US20230169738A1 (en) Building data platform with augmented reality based digital twins
US9904355B2 (en) Display method, image capturing method and electronic device
US11364637B2 (en) Intelligent object tracking
JP6352874B2 (ja) ウェアラブル端末、方法及びシステム
CN109194920B (zh) 一种基于高清摄像机的智能寻物方法
US20170195559A1 (en) Information processing device, imaging device, imaging system, method of controlling information processing device and program
CN112905133A (zh) 用于显示控制的方法及装置、终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19890126

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 02/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19890126

Country of ref document: EP

Kind code of ref document: A1