WO2018120033A1 - Method and device for assisting user in finding object - Google Patents

Method and device for assisting user in finding object Download PDF

Info

Publication number
WO2018120033A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
target object
virtual space
dimensional spatial
updated
Prior art date
Application number
PCT/CN2016/113534
Other languages
French (fr)
Chinese (zh)
Inventor
南一冰
廉士国
李强
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2016/113534 priority Critical patent/WO2018120033A1/en
Priority to CN201680007027.0A priority patent/CN107278301B/en
Publication of WO2018120033A1 publication Critical patent/WO2018120033A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries

Definitions

  • the present application relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for assisting a user in searching for objects.
  • To assist blind people in finding objects, the prior art provides a vision-substitution method based on cognition and target recognition: an image of the blind person's current scene is captured, every object appearing in the scene image is detected, and only the information about all of those objects is presented to the blind person, who must then decide on his or her own how to find the target object.
  • This method offers limited help during the search and gives the blind person no guidance while he or she is actually looking for the object, so it does not help blind people find the desired object in a timely and effective way.
  • the embodiment of the present application provides a method and an apparatus for assisting a user to find an object, and mainly solves the problem that the existing technology cannot effectively help a blind person to find a desired object.
  • the present application provides a method for assisting a user in finding an object, including: determining a target object and an initial three-dimensional spatial position of the target object relative to the user, the target object being the object the user wants to find; generating an initial virtual space sound corresponding to the initial three-dimensional spatial position; and, while the user is searching, updating the three-dimensional spatial position of the target object relative to the user in real time and generating a new virtual space sound corresponding to the updated three-dimensional spatial position.
  • the present application provides an apparatus for assisting a user in finding an object, including: a detecting unit configured to determine a target object, the target object being the object the user wants to find; a position determining unit configured to determine an initial three-dimensional spatial position, relative to the user, of the target object detected by the detecting unit; and a virtual space sound generating unit configured to generate an initial virtual space sound corresponding to the initial three-dimensional spatial position determined by the position determining unit. The position determining unit is further configured to update the three-dimensional spatial position of the target object relative to the user in real time while the user is searching, and the virtual space sound generating unit is further configured to generate a new virtual space sound corresponding to the updated three-dimensional spatial position determined by the position determining unit.
  • the present application provides an electronic device including a memory, a communication interface and a processor, where the memory stores computer executable code, the processor executes that code to perform the above method for assisting a user in finding an object, and the communication interface is used for data transmission between the electronic device and an external device.
  • the present application provides a robot, including the above electronic device.
  • the present application provides a computer storage medium for storing computer software instructions, including program code designed to perform the above-described method of assisting a user to find objects.
  • the present application provides a computer program product that can be directly loaded into an internal memory of a computer and includes software code, and the software code can be loaded and executed by a computer to implement the above-described method for assisting a user to find objects.
  • The method and apparatus for assisting a user in finding an object first determine the object the user wants to find (called the target object in this application) and the initial three-dimensional spatial position of the target object relative to the user, and generate an initial virtual space sound corresponding to that initial position; then, while the user is searching, the three-dimensional spatial position of the target object relative to the user's current position is updated in real time and a new virtual space sound corresponding to the updated position is generated.
  • In the prior art, the user is only given, at the start, information about every object in the scene and must decide and search unaided. Here, the three-dimensional position of the target object relative to the user is converted into a virtual space sound, turning visual position information into sound information, so that a blind person can judge where the target object is from the sound; and as the blind person moves during the search, the position of the target object relative to him or her keeps changing, and the virtual space sound is updated in real time on the basis of that changing position, which helps the blind person judge the position of the target object accurately and find it in a timely and effective way.
  • FIG. 1 is a schematic diagram of an auxiliary user object-seeking device according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for assisting a user in searching for objects according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of another method for assisting a user in finding an object according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of yet another method for assisting a user in finding an object according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of still another method for assisting a user in finding an object according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an auxiliary user object-seeking device according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of another auxiliary user object-seeking device according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of yet another auxiliary user object-seeking device according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • For ease of description, the electronic device used to help the user find objects is referred to in the embodiments of the present application as an auxiliary user object-seeking device.
  • The device may be built into a smart helmet that the user wears to carry out the embodiments of the present application.
  • The device may also be integrated into a mobile guide robot, which performs the method described in the embodiments of the present application to help a blind user find objects.
  • The auxiliary user object-seeking device may also take the form of another wearable device, which is not limited in the embodiments of the present application.
  • The auxiliary user object-seeking device assists the user by performing the method provided in the embodiments of the present application.
  • the embodiment of the present application provides a method for assisting a user in searching for objects. As shown in FIG. 2, the method includes:
  • Step 101 Determine a target object.
  • the target object is an object to be searched by a user.
  • The user may input to the auxiliary user object-seeking device a hint related to the object to be found; the hint may be the name of the object or a description of its features.
  • The auxiliary user object-seeking device receives the hint entered by the user, extracts keywords from it to work out which object the user wants to find, and then performs target detection in a pre-stored panoramic image corresponding to the user's scene to determine the target object.
  • The hint may be entered by voice; it may also be entered with buttons.
  • In the latter case, object categories can be predefined for certain button combinations, and the user presses buttons to select the object to look for.
  • The panoramic image is obtained by an image acquisition device photographing the scene in which the user is located and is sent by the image acquisition device to the auxiliary user object-seeking device.
  • In this embodiment the image acquisition device may be a panoramic camera, which can detect not only objects within the user's field of view, such as objects in front of the user, but also objects outside it, such as objects behind the user.
  • the target detection (also known as object detection) technique can detect a two-dimensional position of a given category of objects contained in the panoramic image, including two-dimensional coordinates and the width and height of the object.
  • Step 102 Determine an initial three-dimensional spatial position of the target object with respect to the user.
  • Target detection yields the two-dimensional position of the target object; to determine its three-dimensional spatial position, a depth sensor is also needed. The depth sensor provides depth information for the observed scene, that is, the distance between each object in the scene and the sensor.
  • For example, the depth sensor may be a stereo vision sensor such as a binocular camera, or a laser scanning radar. The depth sensor may be implemented as in the prior art and is not described further here.
  • Step 103 Generate an initial virtual space sound corresponding to the initial three-dimensional spatial position.
  • A virtual space sound is produced by using signal processing to simulate the transfer functions from a sound source to the two ears, based on how the human ear perceives sound signals, so as to reconstruct a complex three-dimensional virtual sound field.
  • Virtual space sound technology may be implemented as in the prior art and is not described further in the embodiments of the present application.
  • the user can use the virtual space sound generated in this step to obtain a prompt of the location of the target object.
  • Step 104 Update the three-dimensional spatial position of the target object relative to the user in real time during the user's object searching process.
  • Step 105 Generate a new virtual space sound corresponding to the updated three-dimensional space position.
  • the auxiliary user object-seeking device continuously tracks the current position of the user and continuously generates a new virtual space sound to continuously give the user the latest prompt.
  • The method for assisting a user in finding an object provided by the present application first determines the object the user wants to find (called the target object in this application) and the initial three-dimensional spatial position of the target object relative to the user, and generates an initial virtual space sound corresponding to that initial position; then, while the user is searching, the three-dimensional spatial position of the target object relative to the user's current position is updated in real time and a new virtual space sound corresponding to the updated position is generated.
  • In the prior art, the user is only given, at the start, information about every object in the scene and must decide and search unaided. Here, the three-dimensional position of the target object relative to the user is converted into a virtual space sound, turning visual position information into sound information, so that a blind person can judge where the target object is from the sound; and as the blind person moves during the search, the position of the target object relative to him or her keeps changing, and the virtual space sound is updated in real time on the basis of that changing position, which helps the blind person judge the position of the target object accurately and find it in a timely and effective way.
  • When the target object is being determined in step 101, if several candidate target objects are detected from the hint entered by the user, a prompt is issued asking the user to pick the final target object from the detected candidates.
  • The prompt may ask the user to enter more keywords.
  • Alternatively, the information about each detected candidate may be provided to the user so that the user enters a further hint to settle on the final target object.
  • For example, the user asks for a cup, but a coffee cup, a red mug and other cups are all detected; a reminder is then issued asking the user for more detail, such as colour or function, and the final target object, for example the red mug, is determined from the user's response.
  • When the virtual space sound is generated, its sound type may also be chosen according to the category of the target object, so that the sound of the virtual space sound matches the kind of object being sought.
  • For example, if the user is looking for a cup, the virtual space sound can be the sound of flowing water.
  • If the user is looking for a car, it can be a car horn; if the user is looking for a mobile phone, it can be a telephone ringtone.
  • The virtual space sound may also be the name of the target object.
  • For example, when the user is looking for a car, the virtual space sound can be the word "car" repeated continuously.
  • To guarantee that the target object remains unique throughout the search, tracking technology is used to lock onto the target object once it has been determined. Therefore, after step 101 "determine a target object", as shown in FIG. 3, the method further includes tracking the determined target object and the user in real time (step 201).
  • Tracking of the target object may be implemented with existing target tracking technology, such as tracking based on computer vision, and is not described further here.
  • One available method for detecting a target object is to continuously perform target detection in real time during the user's object-seeking process to continuously determine the target object.
  • this method may bring the following disadvantages: during a certain detection process, the detected target object is different from the initially detected target object, or a new target object is detected.
  • By tracking the target object after it has been determined, the present application effectively "locks" the target object and guarantees its uniqueness throughout the search.
  • Correspondingly, updating the three-dimensional spatial position of the target object relative to the user in real time while the user is searching specifically means determining the current positions of the target object and the user from the tracking results and then determining the updated relative three-dimensional spatial position from those current positions.
  • A virtual space sound generally conveys direction rather than distance.
  • For example, when the target object is directly in front of the user, the virtual space sound can only indicate that the object is ahead; it cannot convey how far away it is. Therefore, to give the user a better sense of the distance to the target object, as shown in FIG. 4, after step 104 "update the three-dimensional spatial position of the target object relative to the user's current position in real time while the user is searching", the method further includes:
  • Step 301: detect whether the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update.
  • If it is, the target object is closer to the user, and step 105 "generate a new virtual space sound corresponding to the updated three-dimensional spatial position" may be carried out as step 302.
  • Step 302: generate a new virtual space sound corresponding to the updated three-dimensional spatial position, with a frequency higher than that of the virtual space sound corresponding to the position before the update.
  • Besides raising the frequency of the virtual space sound, the user can also be told that he or she is getting closer to the target object by turning up the volume, and that he or she is moving away from the target object by turning the volume down.
  • In addition, after the new virtual space sound corresponding to the updated three-dimensional spatial position has been generated, as shown in FIG. 5, the method further includes:
  • Step 401: when the distance between the target object and the user is smaller than a preset threshold, guide the user toward the target object with voice prompts.
  • The preset threshold can be set as required.
  • For example, when the target object is very close to the user, say on the user's left and within arm's reach, a voice prompt can simply tell the user that it is on his or her left.
  • This prompting method does not require a complicated process of generating a virtual space sound, and is simple and effective.
  • Because the detection accuracy of the image detection device is limited, the target object may move out of its field of view while the user walks wearing the helmet described above or while the mobile robot described above is moving, causing tracking to fail; the target object may also become occluded. Either situation can interfere with the search or make it fail.
  • In such cases the user should be reminded promptly, for example to adjust his or her position, while the system automatically restarts the method from step 101; if the target object still cannot be detected within a preset time or after several adjustments, the user can be asked whether to end the search.
  • The above method provided by the embodiments of the present application can assist any user in finding objects, for example a blind user, or an ordinary user who wears a helmet implementing the method to play an object-finding game.
  • To realise the above functions, the auxiliary user object-seeking device includes hardware structures and/or software modules corresponding to each function.
  • The units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A skilled person may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present application.
  • In the embodiments of the present application, the auxiliary user object-seeking device and the like may be divided into functional modules according to the above method examples.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 6 shows a possible structure of the auxiliary user object-seeking device involved in the above embodiments when each functional module is divided by function; the device includes a detecting unit 501, a position determining unit 502 and a virtual space sound generating unit 503.
  • The detecting unit 501 is configured to support the auxiliary user object-seeking device in performing step 101 in FIG. 2.
  • the position determining unit 502 is configured to support the auxiliary user searching device to perform step 102, step 104, step 202, step 203, step 301, step 302, and Step 401:
  • the virtual space sound generating unit 503 is configured to support the auxiliary user object searching device to perform step 103, step 105, and step 303.
  • The auxiliary user object-seeking device involved in the above embodiments may further include a tracking unit 601, configured to support the device in performing step 201. For all details of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
  • FIG. 8 shows a possible structural diagram involved in the above embodiment.
  • the auxiliary user object finding device includes a processing module 701 and a communication module 702.
  • the processing module 701 is configured to control and manage the actions of the auxiliary user object searching device.
  • the processing module 701 is configured to support the auxiliary user object searching device to perform the processes 101 to 105 in FIG. 2, and the processes 201 to 204 in FIG. 3, FIG. Processes 301 through 303, process 401 in FIG. 5, and/or other processes for the techniques described herein.
  • the communication module 702 is configured to support communication between the auxiliary user object-seeking device and other network entities, such as with the functional modules or network entities shown in FIG.
  • the auxiliary user object-seeking device may further include a storage module 703 for storing program codes and data of the auxiliary user-seeking device.
  • The processing module 701 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • The processor may also be a combination of computing devices, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 702 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 703 can be a memory.
  • When the processing module 701 is a processor, the communication module 702 is a communication interface and the storage module 703 is a memory, the auxiliary user object-seeking device involved in the embodiments of the present application may be the electronic device shown in FIG. 9.
  • the electronic device includes a processor 801, a communication interface 802, a memory 803, and a bus 804.
  • the processor 801, the communication interface 802, and the memory 803 are connected to each other through a bus 804.
  • The bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • The bus can be divided into an address bus, a data bus, a control bus and so on. For ease of illustration, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or one type of bus.
  • The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read only memory (ROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and device for assisting a user in finding an object, relating to the technical field of artificial intelligence and mainly solving the problem that the prior art cannot effectively assist a blind person in finding a desired object. The method for assisting a user in finding an object comprises: determining a target object (101), the target object being the object the user wants to find; determining an initial three-dimensional spatial position of the target object relative to the user (102); generating an initial virtual space sound corresponding to the initial three-dimensional spatial position (103); while the user is searching for the object, updating the three-dimensional spatial position of the target object relative to the user in real time (104); and generating a new virtual space sound corresponding to the updated three-dimensional spatial position (105). The method is applied while a blind person searches for an object.

Description

Method and device for assisting a user in finding an object

Technical Field

The present application relates to the field of artificial intelligence, and in particular to a method and an apparatus for assisting a user in finding an object.

Background

In everyday life, blind people need to find specific objects. To assist them, the prior art provides a vision-substitution method based on cognition and target recognition: an image of the blind person's current scene is captured, every object appearing in the scene image is detected, and only the information about all of those objects is presented to the blind person, who must then decide on his or her own how to find the target object. Such a method offers limited help during the search, gives the blind person no guidance while he or she is actually looking for the object, and therefore does not help blind people find the desired object in a timely and effective way.

Summary of the Invention

Embodiments of the present application provide a method and an apparatus for assisting a user in finding an object, mainly to solve the problem that the prior art cannot effectively help a blind person find a desired object.

To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a method for assisting a user in finding an object, including: determining a target object and an initial three-dimensional spatial position of the target object relative to the user, the target object being the object the user wants to find; generating an initial virtual space sound corresponding to the initial three-dimensional spatial position; and, while the user is searching, updating the three-dimensional spatial position of the target object relative to the user in real time and generating a new virtual space sound corresponding to the updated three-dimensional spatial position.

In a second aspect, the present application provides an apparatus for assisting a user in finding an object, including: a detecting unit configured to determine a target object, the target object being the object the user wants to find; a position determining unit configured to determine an initial three-dimensional spatial position, relative to the user, of the target object detected by the detecting unit; and a virtual space sound generating unit configured to generate an initial virtual space sound corresponding to the initial three-dimensional spatial position determined by the position determining unit. The position determining unit is further configured to update the three-dimensional spatial position of the target object relative to the user in real time while the user is searching, and the virtual space sound generating unit is further configured to generate a new virtual space sound corresponding to the updated three-dimensional spatial position determined by the position determining unit.

In a third aspect, the present application provides an electronic device including a memory, a communication interface and a processor, where the memory is configured to store computer executable code, the processor is configured to execute that code so as to perform the above method for assisting a user in finding an object, and the communication interface is used for data transmission between the electronic device and an external device.

In a fourth aspect, the present application provides a robot including the above electronic device.

In a fifth aspect, the present application provides a computer storage medium for storing computer software instructions, which include the program code designed to perform the above method for assisting a user in finding an object.

In a sixth aspect, the present application provides a computer program product that can be loaded directly into the internal memory of a computer and contains software code which, after being loaded and executed by the computer, implements the above method for assisting a user in finding an object.

The method and apparatus for assisting a user in finding an object provided by the present application first determine the object the user wants to find (called the target object in this application) and the initial three-dimensional spatial position of the target object relative to the user, and generate an initial virtual space sound corresponding to that initial position; then, while the user is searching, the three-dimensional spatial position of the target object relative to the user's current position is updated in real time and a new virtual space sound corresponding to the updated position is generated. In the prior art, the user is only given, at the start, information about every object in the scene and must make decisions and search unaided. In the present application, by contrast, the three-dimensional position of the target object relative to the user is converted into a virtual space sound, turning visual position information into sound information, so that a blind person can judge the position of the target object from the sound; and as the blind person moves during the search, the position of the target object relative to him or her keeps changing, and the virtual space sound is updated in real time on the basis of that changing position. This helps the blind person judge the position of the target object accurately and thus find the object in a timely and effective way.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are introduced briefly below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an auxiliary user object-seeking device according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of a method for assisting a user in finding an object according to an embodiment of the present application;

FIG. 3 is a schematic flowchart of another method for assisting a user in finding an object according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of yet another method for assisting a user in finding an object according to an embodiment of the present application;

FIG. 5 is a schematic flowchart of still another method for assisting a user in finding an object according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an auxiliary user object-seeking device according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of another auxiliary user object-seeking device according to an embodiment of the present application;

FIG. 8 is a schematic structural diagram of yet another auxiliary user object-seeking device according to an embodiment of the present application;

FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description

The system architectures and service scenarios described in the embodiments of the present application are intended to explain the technical solutions of the embodiments more clearly and do not limit them. A person of ordinary skill in the art will appreciate that, as system architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.

It should be noted that in the embodiments of the present application the words "exemplary" and "for example" are used to give an example, an illustration or an explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferred over, or more advantageous than, other embodiments or designs; rather, these words are intended to present the relevant concept in a concrete manner.

It should also be noted that in the embodiments of the present application "of", "relevant" and "corresponding" may sometimes be used interchangeably; where the distinction is not emphasised, the intended meaning is the same.

Users, and blind users in particular, can use an electronic device to help them find objects. For ease of description, such an electronic device is referred to in the embodiments of the present application as an auxiliary user object-seeking device. In one implementation, shown in FIG. 1, the device is built into a smart helmet that the user wears in order to carry out the method described in the embodiments of the present application. The device may also be integrated into a mobile guide robot, which performs the method to help a blind user find objects; it may equally take the form of another wearable device, which is not limited in the embodiments of the present application.

The auxiliary user object-seeking device assists the user by performing the method provided in the embodiments of the present application. An embodiment of the present application provides a method for assisting a user in finding an object which, as shown in FIG. 2, includes the following steps:
Step 101: determine a target object.

The target object is the object the user wants to find.

Optionally, in one implementation of this step, the user inputs to the auxiliary user object-seeking device a hint related to the object to be found; the hint may be the name of the object or a description of its features. The device receives the hint, extracts keywords from it to work out which object the user wants to find, and then performs target detection in a pre-stored panoramic image corresponding to the user's scene to determine the target object.

The hint may be entered by voice. It may also be entered with buttons, in which case object categories can be predefined for certain button combinations and the user presses buttons to select the object to look for. The panoramic image is obtained by an image acquisition device photographing the scene in which the user is located and is sent by that device to the auxiliary user object-seeking device. In this embodiment the image acquisition device may be a panoramic camera, which can detect not only objects within the user's field of view, such as objects in front of the user, but also objects outside it, such as objects behind the user. Target detection (also called object detection) can determine the two-dimensional position of objects of a given category in the panoramic image, including their two-dimensional coordinates, width and height.
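As a rough illustration of step 101, the sketch below matches the keywords extracted from the user's hint against the labels returned by an object detector run on the panoramic image. The detection format, label names and keyword matching are simplifying assumptions made for illustration; they are not part of the disclosed embodiment.

```python
# Minimal sketch of step 101 (assumed interfaces): pick candidate target objects
# whose detector label shares a keyword with the user's hint.

def find_target_candidates(hint, detections):
    """hint: free-text prompt from the user, e.g. "red mug".
    detections: 2-D detections from any object detector run on the panoramic
    image, given here as dicts {"label": ..., "bbox": (x, y, w, h), "score": ...}."""
    keywords = set(hint.lower().split())
    return [d for d in detections if keywords & set(d["label"].lower().split())]

detections = [
    {"label": "coffee cup", "bbox": (120, 80, 40, 60), "score": 0.91},
    {"label": "red mug",    "bbox": (640, 90, 35, 55), "score": 0.88},
]
print(find_target_candidates("red mug", detections))  # only the red mug matches
```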
Step 102: determine the initial three-dimensional spatial position of the target object relative to the user.

As noted above, target detection also yields the two-dimensional position of the target object. To determine its three-dimensional spatial position, a depth sensor is additionally required; the depth sensor provides depth information for the observed scene, that is, the distance between each object in the scene and the sensor. For example, the depth sensor may be a stereo vision sensor such as a binocular camera, or a laser scanning radar. The depth sensor may be implemented as in the prior art and is not described further here.
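One way to combine the 2-D detection with the depth reading, sketched below under the assumption of a calibrated pinhole camera, is to back-project the centre of the bounding box to a 3-D point in the user-centred frame. The intrinsic parameters used here are placeholders, not values from the application.

```python
import math

def bbox_to_relative_position(bbox, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project the bounding-box centre to a 3-D point (x right, y down,
    z forward) in the sensor/user frame using a pinhole model. fx, fy, cx, cy
    are placeholder intrinsics; a real system would use calibrated values."""
    u = bbox[0] + bbox[2] / 2.0
    v = bbox[1] + bbox[3] / 2.0
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    z = depth_m
    azimuth = math.degrees(math.atan2(x, z))      # positive = to the user's right
    elevation = math.degrees(math.atan2(-y, z))   # positive = above the optical axis
    distance = math.sqrt(x * x + y * y + z * z)
    return (x, y, z), azimuth, elevation, distance

print(bbox_to_relative_position((300, 200, 40, 60), depth_m=2.5))
```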
Step 103: generate an initial virtual space sound corresponding to the initial three-dimensional spatial position.

A virtual space sound is produced by using signal processing to simulate the transfer functions from a sound source to the two ears, based on how the human ear perceives sound signals, so as to reconstruct a complex three-dimensional virtual sound field. Virtual space sound technology may be implemented as in the prior art and is not described further in the embodiments of the present application.

The virtual space sound generated in this step tells the user where the target object is located.
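The embodiment relies on HRTF-style rendering as described above. As a very rough stand-in, the sketch below spatialises a mono cue using only an interaural time difference, a level difference and distance attenuation; a real implementation would convolve the cue with measured head-related transfer functions, and every constant here is an assumption.

```python
import numpy as np

def render_binaural(mono, sample_rate, azimuth_deg, distance_m,
                    head_radius_m=0.0875, speed_of_sound=343.0):
    """Crude spatialisation: delay and attenuate the far-ear channel according
    to azimuth (positive = right), and scale both channels by 1/distance.
    This is an illustrative stand-in for proper HRTF convolution."""
    az = np.radians(azimuth_deg)
    itd = head_radius_m / speed_of_sound * (abs(az) + np.sin(abs(az)))  # Woodworth approximation
    delay = int(round(itd * sample_rate))
    gain = 1.0 / max(distance_m, 0.3)
    far_gain = gain * (0.6 + 0.4 * np.cos(az))         # far ear slightly quieter
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    near, far = gain * mono, far_gain * delayed
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

sr = 16000
t = np.arange(sr) / sr
cue = 0.2 * np.sin(2 * np.pi * 440 * t)                # 1 s, 440 Hz tone
print(render_binaural(cue, sr, azimuth_deg=60.0, distance_m=2.5).shape)  # (16000, 2)
```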
Step 104: while the user is searching, update the three-dimensional spatial position of the target object relative to the user in real time.

Step 105: generate a new virtual space sound corresponding to the updated three-dimensional spatial position.

In practice, the user's own position keeps changing while he or she follows the initial virtual space sound. In this embodiment the auxiliary user object-seeking device therefore keeps tracking the user's current position and keeps generating new virtual space sounds so that the user always receives up-to-date guidance.
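The following skeleton shows how steps 104 and 105 can fit together in a periodic update loop. The three callables stand in for the tracking, user-localisation and sound-rendering components, and the toy run at the end simply simulates a user walking toward a fixed target; everything about these interfaces is assumed rather than taken from the application.

```python
import math
import time

def update_loop(get_target_position, get_user_pose, emit_cue, period_s=0.25, reach_m=0.4):
    """Steps 104-105 in sketch form: each cycle, recompute where the target lies
    relative to the user's current pose and emit a fresh spatial cue.
    All three callables are hypothetical placeholders."""
    while True:
        tx, ty = get_target_position()          # tracked target, world coordinates (m)
        ux, uy, heading = get_user_pose()       # user position (m) and facing direction (rad)
        dx, dy = tx - ux, ty - uy
        distance = math.hypot(dx, dy)
        azimuth = math.degrees(math.atan2(dx, dy) - heading)
        emit_cue(azimuth, distance)             # e.g. play render_binaural(...) from the earlier sketch
        if distance < reach_m:                  # within arm's reach; switch to voice prompts (step 401)
            return
        time.sleep(period_s)

# Toy run: the target is fixed at (2, 3) and the simulated user closes 20 % of the gap per cycle.
user = [0.0, 0.0]
def fake_user_pose():
    user[0] += 0.2 * (2.0 - user[0])
    user[1] += 0.2 * (3.0 - user[1])
    return user[0], user[1], 0.0

update_loop(lambda: (2.0, 3.0), fake_user_pose,
            lambda az, d: print(f"azimuth {az:5.1f} deg, distance {d:.2f} m"), period_s=0.0)
```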
The method for assisting a user in finding an object provided by the present application first determines the object the user wants to find (called the target object in this application) and the initial three-dimensional spatial position of the target object relative to the user, and generates an initial virtual space sound corresponding to that initial position; then, while the user is searching, it updates the three-dimensional spatial position of the target object relative to the user's current position in real time and generates a new virtual space sound corresponding to the updated position. In the prior art, the user is only given, at the start, information about every object in the scene and must make decisions and search unaided. In the present application, by contrast, the three-dimensional position of the target object relative to the user is converted into a virtual space sound, turning visual position information into sound information, so that a blind person can judge the position of the target object from the sound; and as the blind person moves during the search, the position of the target object relative to him or her keeps changing, and the virtual space sound is updated in real time on the basis of that changing position. This helps the blind person judge the position of the target object accurately and thus find the object in a timely and effective way.

In practice, if several candidate target objects are detected from the hint entered by the user when the target object is being determined in step 101, a prompt is issued asking the user to pick the final target object from the detected candidates. The prompt may ask the user to enter more keywords, or the information about each detected candidate may be presented to the user so that the user can enter a further hint to settle on the final target object.

For example, the user asks for a cup, but a coffee cup, a red mug and other cups are all detected. A reminder is then issued asking the user for more detail, such as colour or function, and the final target object, for example the red mug, is determined from the user's response.

Optionally, to give the user a better experience, when the virtual space sound is generated its sound type may also be chosen according to the category of the target object, so that the sound matches the kind of object being sought. For example, if the user is looking for a cup, the virtual space sound can be the sound of flowing water; if the user is looking for a car, it can be a car horn; if the user is looking for a mobile phone, it can be a telephone ringtone.

Alternatively, the virtual space sound can be the name of the target object. For example, when the user is looking for a car, the virtual space sound can be the word "car" repeated continuously.
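A simple lookup table is one way to realise the category-to-sound mapping just described; the file names below and the fall-back of speaking the object's name are illustrative assumptions.

```python
# Illustrative mapping from target category to the cue used for the virtual
# space sound. File names are placeholders; unknown categories fall back to
# repeating the object's name, as described above.
CUE_SOUNDS = {
    "cup": "flowing_water.wav",
    "car": "car_horn.wav",
    "mobile phone": "ringtone.wav",
}

def cue_for(target_label):
    return CUE_SOUNDS.get(target_label, f"speak:{target_label}")

print(cue_for("cup"))    # flowing_water.wav
print(cue_for("keys"))   # speak:keys -> repeat the word "keys" as the cue
```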
In practice, to guarantee that the target object remains unique throughout the search, this embodiment locks onto the target object with tracking technology once it has been determined. Accordingly, after step 101 "determine a target object", as shown in FIG. 3, the method further includes:

Step 201: track the determined target object and the user in real time.

Tracking of the target object may be implemented with existing target tracking technology, for example tracking based on computer vision, and is not described further here.

One possible way to detect the target object would be to run target detection continuously in real time throughout the search. That approach, however, has a drawback: in a given detection pass, the detected object may differ from the one detected initially, or a new target object may be detected. In the present application, tracking the target object after it has been determined effectively "locks" it and guarantees its uniqueness throughout the search.
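One concrete way to "lock" the detected target, sketched below, is to hand the initial bounding box to a stock visual tracker such as OpenCV's CSRT tracker. This is only one possible realisation of the tracking step, and depending on the OpenCV build the tracker factory lives either on cv2 or on cv2.legacy.

```python
import cv2  # requires opencv-contrib-python for the tracking module

def lock_target(first_frame, bbox):
    """Lock onto the detected target so that later detections cannot swap it out.
    bbox is (x, y, w, h) from the initial target detection (step 101)."""
    create = getattr(cv2, "TrackerCSRT_create", None) or cv2.legacy.TrackerCSRT_create
    tracker = create()
    tracker.init(first_frame, tuple(int(v) for v in bbox))
    return tracker

def track_step(tracker, frame):
    """Return the tracked bounding box for this frame, or None on tracking failure
    (the failure case is handled by the re-detection logic discussed further below)."""
    ok, bbox = tracker.update(frame)
    return bbox if ok else None
```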
Correspondingly, "updating the three-dimensional spatial position of the target object relative to the user in real time while the user is searching" in step 104 specifically includes:

Step 202: while the user is searching, determine the current positions of the target object and of the user from the tracking results for each.

Step 203: determine the updated three-dimensional spatial position of the target object relative to the user from the current positions of the target object and the user.
In practice, a virtual space sound generally conveys direction rather than distance. For example, when the target object is directly in front of the user, the virtual space sound can only indicate that the object is ahead; it cannot convey how far away it is. To give the user a better sense of the distance to the target object, as shown in FIG. 4, after step 104 "update the three-dimensional spatial position of the target object relative to the user's current position in real time while the user is searching", the method further includes:

Step 301: detect whether the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update.

If it is, the target object is closer to the user, and step 105 "generate a new virtual space sound corresponding to the updated three-dimensional spatial position" may be carried out as step 302 below.

If it is not, the target object is moving farther from the user, and the frequency of the virtual space sound can be reduced.

Step 302: generate a new virtual space sound corresponding to the updated three-dimensional spatial position, with a frequency higher than that of the virtual space sound corresponding to the position before the update.

Optionally, besides raising the frequency of the virtual space sound to indicate that the user is getting closer to the target object, the volume can also be turned up for the same purpose, and turned down to indicate that the user is moving away from the target object.
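Steps 301 and 302 amount to comparing two distances and then scaling the cue. The sketch below expresses that with arbitrary scaling factors and treats "frequency" as the repetition rate of the cue, which is one possible reading of the text above.

```python
def adapt_cue(prev_distance_m, new_distance_m, rate_hz, gain):
    """Steps 301-302 in sketch form: if the updated position is closer than the
    previous one, repeat the cue faster and louder; otherwise slower and quieter.
    The 1.25x / 0.8x factors are arbitrary illustrative choices."""
    factor = 1.25 if new_distance_m < prev_distance_m else 0.8
    return rate_hz * factor, min(1.0, gain * factor)

print(adapt_cue(2.0, 1.6, rate_hz=1.0, gain=0.5))  # closer  -> faster, louder
print(adapt_cue(1.6, 2.0, rate_hz=1.0, gain=0.5))  # farther -> slower, quieter
```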
In practical applications, when the user is already very close to the target object, the user's movements are small, the change in the position of the target object relative to the user is likewise small, and the differences between successively generated virtual space sounds may therefore also be small. In this case, continuing to guide the user with virtual space sound is of little practical value. Therefore, in order to keep providing the user with timely and effective prompts, after step 105 of "generating a new virtual space sound corresponding to the updated three-dimensional spatial position", as shown in FIG. 5, the method further includes:
Step 401: When the distance between the target object and the user is less than a preset threshold, prompt the user by voice to guide the user to gradually approach the target object.
The preset threshold may be set according to actual needs.
For example, when the target object is very close to the user, for instance on the user's left side and within arm's reach, a voice prompt may simply tell the user that the object is on his or her left. This kind of prompt does not require the more involved process of generating a virtual space sound, and is simple and effective.
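Step 401 can be pictured as a hand-off from spatialized cues to a spoken hint once the target is within reach. The sketch below is illustrative only; the threshold value, the direction wording, and the speak callback are assumptions rather than elements of the disclosure.

```python
def maybe_voice_prompt(rel_pos, speak, threshold_m=0.5):
    """Step 401 (sketch): once the target is within an arm's-reach threshold,
    switch from spatialized cues to a plain spoken hint."""
    x, y, _ = rel_pos                     # user frame: +y forward, +x right (assumed)
    distance = (x * x + y * y) ** 0.5
    if distance >= threshold_m:
        return False                      # keep using the virtual space sound

    if abs(x) > abs(y):
        hint = "on your right" if x > 0 else "on your left"
    else:
        hint = "in front of you" if y > 0 else "behind you"
    speak(f"The object is {hint}, within reach.")
    return True
```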
Considering that the detection accuracy of the image detection device is limited, when the user walks while wearing the helmet described above, or when the mobile robot described above moves, the target object may move out of the field of view of the image detection device and tracking may fail; alternatively, the target object may become occluded. Either situation may interfere with the object search or cause it to fail. In such cases, a corresponding reminder should be issued to the user in time, informing the user to adjust his or her position, and the system automatically re-executes the method starting from step 101. If the target object still cannot be detected within a preset time or after multiple adjustments, the user may be asked whether to end the search.
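The recovery behaviour described above (remind the user, restart from step 101, and eventually ask whether to stop) could be organized as the retry loop sketched below; the callback names, retry limit, and timeout are illustrative assumptions.

```python
import time

def find_object_with_recovery(run_from_step_101, tracking_ok, notify_user,
                              max_retries=3, timeout_s=30.0):
    """Sketch of the recovery flow: if tracking is lost (target out of the
    camera's field of view or occluded), remind the user, restart from
    step 101, and after repeated failures ask whether to end the search."""
    for _attempt in range(max_retries):
        run_from_step_101()                     # re-detect the target and restart guidance
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if tracking_ok():
                return True                     # tracking recovered, guidance continues
            time.sleep(0.1)
        notify_user("Tracking lost, please adjust your position.")
    notify_user("The object could not be detected. End this search?")
    return False
```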
The above method provided by the embodiments of the present application can assist any user in finding objects, for example, assisting a blind user in finding objects, or allowing an ordinary user wearing a helmet that implements the above method to play an object-finding game.
It can be understood that, in order to implement the above functions, the device for assisting a user in finding objects includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The embodiments of the present application may divide the device for assisting a user in finding objects into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a division by logical function; other division manners are possible in actual implementation.
In the case where each functional module is divided according to its corresponding function, FIG. 6 shows a possible schematic structural diagram of the device for assisting a user in finding objects involved in the above embodiments. The device includes: a detecting unit 501, a position determining unit 502, and a virtual space sound generating unit 503. The detecting unit 501 is configured to support the device in performing process 101 in FIG. 2; the position determining unit 502 is configured to support the device in performing step 102, step 104, step 202, step 203, step 301, step 302, and step 401; the virtual space sound generating unit 503 is configured to support the device in performing step 103, step 105, and step 303.
Optionally, as shown in FIG. 7, the device for assisting a user in finding objects involved in the above embodiments further includes a tracking unit 601, configured to support the device in performing step 201. For all related details of the steps involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
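Purely as an illustrative sketch of the unit division shown in FIG. 6 and FIG. 7, the modules could be composed as follows; the class and method names are placeholders and do not appear in the disclosure.

```python
class AssistedObjectFinder:
    """Sketch of the functional-module split: detecting unit 501, position
    determining unit 502, virtual space sound generating unit 503, and the
    optional tracking unit 601 of FIG. 7."""

    def __init__(self, detecting_unit, position_unit, sound_unit, tracking_unit=None):
        self.detecting_unit = detecting_unit   # determines the target object (process 101)
        self.position_unit = position_unit     # steps 102, 104, 202, 203, 301, 302, 401
        self.sound_unit = sound_unit           # steps 103, 105, 303
        self.tracking_unit = tracking_unit     # step 201 (optional)

    def step(self):
        """One guidance iteration: track, update the relative position, emit a cue."""
        if self.tracking_unit is not None:
            self.tracking_unit.track()
        rel_pos = self.position_unit.update_relative_position()
        self.sound_unit.emit_virtual_space_sound(rel_pos)
```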
In the case of an integrated unit, FIG. 8 shows another possible schematic structural diagram involved in the above embodiments. The device for assisting a user in finding objects includes: a processing module 701 and a communication module 702. The processing module 701 is configured to control and manage the actions of the device; for example, the processing module 701 is configured to support the device in performing processes 101 to 105 in FIG. 2, processes 201 to 204 in FIG. 3, processes 301 to 303 in FIG. 4, process 401 in FIG. 5, and/or other processes of the techniques described herein. The communication module 702 is configured to support communication between the device and other network entities, for example, communication with the functional modules or network entities shown in FIG. 1. The device may further include a storage module 703 for storing the program code and data of the device.
The processing module 701 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on. The communication module 702 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 703 may be a memory.
When the processing module 701 is a processor, the communication module 702 is a communication interface, and the storage module 703 is a memory, the device for assisting a user in finding objects involved in the embodiments of the present application may be the electronic device shown in FIG. 9.
Referring to FIG. 9, the electronic device includes: a processor 801, a communication interface 802, a memory 803, and a bus 804. The processor 801, the communication interface 802, and the memory 803 are connected to one another through the bus 804. The bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or only one type of bus.
The steps of the method or algorithm described in connection with the disclosure of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The specific embodiments described above further describe in detail the objectives, technical solutions, and beneficial effects of the present application. It should be understood that the foregoing are merely specific embodiments of the present application and are not intended to limit the scope of protection of the present application. Any modification, equivalent replacement, improvement, or the like made on the basis of the technical solutions of the present application shall fall within the scope of protection of the present application.

Claims (18)

1. A method for assisting a user in finding an object, comprising:
    determining a target object and an initial three-dimensional spatial position of the target object relative to the user, the target object being an object to be found by the user;
    generating an initial virtual space sound corresponding to the initial three-dimensional spatial position;
    during the user's object-seeking process, updating in real time the three-dimensional spatial position of the target object relative to the user, and generating a new virtual space sound corresponding to the updated three-dimensional spatial position.
2. The method according to claim 1, wherein after the determining a target object, the method further comprises:
    tracking the determined target object and the user in real time;
    and the updating in real time the three-dimensional spatial position of the target object relative to the user specifically comprises:
    determining current positions of the target object and the user, respectively, according to tracking results for the target object and the user;
    determining an updated three-dimensional spatial position of the target object relative to the user according to the current positions of the target object and the user.
3. The method according to claim 1, wherein the sound type of the virtual space sound corresponds to the category of the target object.
4. The method according to claim 1, wherein the generating a new virtual space sound corresponding to the updated three-dimensional spatial position comprises:
    detecting whether the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update;
    if yes, generating a new virtual space sound corresponding to the updated three-dimensional spatial position, wherein the frequency of the new virtual space sound is higher than that of the virtual space sound corresponding to the three-dimensional spatial position before the update.
5. The method according to any one of claims 1 to 4, wherein the determining a target object specifically comprises:
    receiving first prompt information input by the user, the first prompt information being used to specify the target object;
    obtaining a panoramic image corresponding to the scene in which the user is located, and performing target detection in the panoramic image to determine the target object.
6. The method according to claim 5, wherein the performing target detection in the panoramic image to determine the target object specifically comprises:
    when at least one candidate target object is detected after the target detection, outputting second prompt information to the user, the second prompt information being used to prompt the user to select a final target object from the at least one candidate target object;
    receiving a response of the user to the second prompt information, and determining the target object according to the response of the user.
7. The method according to any one of claims 1 to 6, wherein after the generating a new virtual space sound corresponding to the updated three-dimensional spatial position, the method further comprises:
    when the distance between the target object and the user is less than a preset threshold, prompting the user by voice to guide the user to gradually approach the target object.
8. A device for assisting a user in finding an object, comprising:
    a detecting unit, configured to determine a target object, the target object being an object to be found by the user;
    a position determining unit, configured to determine an initial three-dimensional spatial position, relative to the user, of the target object detected by the detecting unit;
    a virtual space sound generating unit, configured to generate an initial virtual space sound corresponding to the initial three-dimensional spatial position determined by the position determining unit;
    wherein the position determining unit is further configured to update in real time, during the user's object-seeking process, the three-dimensional spatial position of the target object relative to the user;
    and the virtual space sound generating unit is further configured to generate a new virtual space sound corresponding to the updated three-dimensional spatial position determined by the position determining unit.
9. The device according to claim 8, further comprising a tracking unit, configured to track the determined target object and the user in real time;
    wherein the position determining unit is further configured to determine current positions of the target object and the user, respectively, according to the tracking results of the tracking unit for the target object and the user, and to determine an updated three-dimensional spatial position of the target object relative to the user according to the current positions of the target object and the user.
10. The device according to claim 8, wherein the sound type of the virtual space sound corresponds to the category of the target object.
11. The device according to claim 8, wherein
    the position determining unit is further configured to detect whether the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update;
    and the virtual space sound generating unit is further configured to, when the position determining unit detects that the updated three-dimensional spatial position is closer to the user than the three-dimensional spatial position before the update, generate a new virtual space sound corresponding to the updated three-dimensional spatial position, wherein the frequency of the new virtual space sound is higher than that of the virtual space sound corresponding to the three-dimensional spatial position before the update.
12. The device according to any one of claims 8 to 11, wherein the detecting unit is further configured to receive first prompt information input by the user, the first prompt information being used to specify the target object, and to obtain a panoramic image corresponding to the scene in which the user is located and perform target detection in the panoramic image to determine the target object.
13. The device according to claim 12, wherein the detecting unit is further configured to, when at least one candidate target object is detected, output second prompt information to the user, the second prompt information being used to prompt the user to select a final target object from the at least one candidate target object, and to receive a response of the user to the second prompt information and determine the target object according to the response of the user.
14. The device according to any one of claims 8 to 13, wherein the position determining unit is further configured to, when the distance between the target object and the user is less than a preset threshold, prompt the user by voice so that the user gradually approaches the target object.
15. An electronic device, comprising: a memory, a communication interface, and a processor, wherein the memory is configured to store computer-executable code, the processor is configured to execute the computer-executable code to control performance of the method for assisting a user in finding an object according to any one of claims 1 to 7, and the communication interface is configured for data transmission between the electronic device and an external device.
16. A robot, comprising the electronic device according to claim 15.
17. A computer storage medium, configured to store computer software instructions, the computer software instructions comprising program code designed to perform the method for assisting a user in finding an object according to any one of claims 1 to 7.
18. A computer program product, which can be directly loaded into an internal memory of a computer and contains software code, wherein the software code, after being loaded into and executed by the computer, can implement the method for assisting a user in finding an object according to any one of claims 1 to 7.
PCT/CN2016/113534 2016-12-30 2016-12-30 Method and device for assisting user in finding object WO2018120033A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/113534 WO2018120033A1 (en) 2016-12-30 2016-12-30 Method and device for assisting user in finding object
CN201680007027.0A CN107278301B (en) 2016-12-30 2016-12-30 Method and device for assisting user in finding object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113534 WO2018120033A1 (en) 2016-12-30 2016-12-30 Method and device for assisting user in finding object

Publications (1)

Publication Number Publication Date
WO2018120033A1 true WO2018120033A1 (en) 2018-07-05

Family

ID=60052252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113534 WO2018120033A1 (en) 2016-12-30 2016-12-30 Method and device for assisting user in finding object

Country Status (2)

Country Link
CN (1) CN107278301B (en)
WO (1) WO2018120033A1 (en)



