CN109472825B - Object searching method and terminal equipment - Google Patents

Object searching method and terminal equipment

Info

Publication number
CN109472825B
CN109472825B CN201811204693.0A
Authority
CN
China
Prior art keywords
target object
information
depth information
model
terminal equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811204693.0A
Other languages
Chinese (zh)
Other versions
CN109472825A (en)
Inventor
冯天良
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811204693.0A priority Critical patent/CN109472825B/en
Publication of CN109472825A publication Critical patent/CN109472825A/en
Application granted granted Critical
Publication of CN109472825B publication Critical patent/CN109472825B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence

Abstract

The embodiment of the invention discloses an object searching method and a terminal device. The object searching method is applied to a terminal device that includes a TOF camera and comprises the following steps: collecting depth information of a target object within the view angle range through the TOF camera; obtaining a 3D model of the target object based on the depth information of the target object when the target object belongs to a preset object set; displaying the 3D model of the target object; receiving a first input from the terminal device user; and, in response to the first input, outputting distance prompt information of the target object, where the distance prompt information is used to prompt the real-time distance between the target object and the terminal device user. The object searching method provided by the invention saves the time and energy a terminal device user spends searching for an object.

Description

Object searching method and terminal equipment
Technical Field
The embodiments of the present invention relate to the technical field of communications, and in particular to an object searching method and a terminal device.
Background
In daily life, people often need to find articles in dark environments. For example, when a person returns home late after working overtime, the family members may have already turned off the lights to rest. If the person wants to find a certain article but does not want to turn on the lights or the flashlight of the terminal device and disturb the family's rest, he or she can only grope for the article in the dark. Searching by touch, the person inevitably bumps into furniture, causing unnecessary collisions, and a great deal of the searcher's time and energy is consumed.
Evidently, an article searcher consumes a great deal of time and energy when looking for articles in a dark environment.
Disclosure of Invention
The embodiment of the invention provides an object searching method to solve the prior-art problems that searching for articles is time-consuming and requires a great deal of energy.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an object search method, which is applied to a terminal device including a TOF camera, where the method includes: collecting depth information of a target object within a visual angle range through a TOF camera; under the condition that the target object belongs to a preset object set, acquiring a 3D model of the target object based on depth information of the target object; displaying a 3D model of the target object; receiving a first input of a terminal device user; and responding to the first input, outputting distance prompt information of the target object, wherein the distance prompt information is used for prompting the real-time distance between the target object and the terminal equipment user.
In a second aspect, an embodiment of the present invention provides a terminal device including a TOF camera, where the terminal device includes: a collection module, configured to collect depth information of a target object within the view angle range through the TOF camera; an obtaining module, configured to obtain a 3D model of the target object based on the depth information of the target object when the target object belongs to a preset object set; a display module, configured to display the 3D model of the target object; a first receiving module, configured to receive a first input from a terminal device user; and a prompt output module, configured to output, in response to the first input, distance prompt information of the target object, where the distance prompt information is used to prompt the real-time distance between the target object and the terminal device user.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, where the computer program, when executed by the processor, implements the steps of any one of the object search methods described in the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the object search methods described in the embodiments of the present invention.
In the embodiment of the invention, depth information of a target object within the view angle range is collected by a TOF (Time of Flight) camera; when the target object belongs to a preset object set, a 3D model of the target object is obtained based on the depth information of the target object; the 3D model of the target object is displayed; a first input from the terminal device user is received; and, in response to the first input, distance prompt information of the target object is output, where the distance prompt information is used to prompt the real-time distance between the target object, that is, the object to be searched, and the terminal device user. The object to be searched can thus be quickly located, and the output distance prompt information guides the terminal device user to the position of the object to be searched, saving the time and energy the terminal device user spends searching for it.
Drawings
Fig. 1 is the first flowchart of an object searching method provided according to an embodiment of the present invention;
Fig. 2 is the second flowchart of an object searching method provided according to an embodiment of the present invention;
Fig. 3 is a block diagram of a terminal device according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the first flowchart of an object searching method provided by an embodiment of the present invention is shown.
The object searching method of the embodiment of the invention comprises the following steps:
step 101: and collecting depth information of the target object within the visual angle range through the TOF camera.
The object searching method provided by the embodiment of the invention is applied to a terminal device that includes a TOF camera. The TOF camera continuously emits light pulses through an infrared emitter; the pulses are reflected when they strike a person or object; a sensor receives the reflected light; and a stereoscopic view of the person or object is formed by calculating the time difference between the emission and the return of the infrared light.
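The ranging principle described above can be sketched in a few lines: depth is the speed of light times the round-trip time, divided by two. This is an illustrative calculation, not code from the patent; the function name and the example round-trip time are assumptions.

```python
# Time-of-flight depth: half the distance light travels during the
# emit-to-return interval. Names here are illustrative.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_depth_m(round_trip_time_s: float) -> float:
    """Depth of a reflecting surface from the infrared pulse's round trip."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A pulse returning after roughly 6.67 nanoseconds indicates a surface
# about one meter away.
depth = tof_depth_m(6.671e-9)
```

A per-pixel array of such depths is what the text calls the depth information of the scene.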
When a terminal device user searches for a certain object, namely the target object, in a dark environment, the user can start the TOF camera of the terminal device. A 3D model of each collected object is constructed from the depth information the TOF camera collects for it, and the models are then displayed so that the user can determine from them whether an object is the target object being searched for. The embodiment of the present invention is described using the example of depth information collected for only one object within the view angle range of the TOF camera. In a specific implementation, depth information of a plurality of objects may be collected within the view angle range, in which case a 3D model of each object is obtained from its collected depth information and displayed.
In a specific implementation, a terminal device user may create an object set in advance, where the object set includes two-dimensional planar images of a plurality of objects. The objects in the set may be all or some of the objects placed in a certain area, objects frequently used by the terminal device user, or small objects that are not easy to find, and so on.
When the depth information of the target object is collected, whether the target object belongs to the preset object set is judged. If so, the subsequent process is executed; if not, the method returns to step 101 to re-collect depth information of objects within the view angle range.
If the target object belongs to the preset object set, the depth information of the target object is stored. Storing the depth information facilitates the subsequent accurate construction of the 3D model of the target object from it.
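The membership check, the loop back to step 101, and the storing of matched depth information can be sketched as follows. All names here are illustrative; the patent does not specify an implementation.

```python
def collect_until_match(frames, preset_objects):
    """Scan collected frames; when a frame's object belongs to the preset
    object set, keep and return its depth information (steps 101-102).
    `frames` is an iterable of (object_name, depth_info) pairs."""
    for name, depth_info in frames:
        if name in preset_objects:
            return name, depth_info   # stored for 3D model construction
        # otherwise: loop back, i.e. move on to the next collected frame
    return None  # nothing in the view angle range matched the set
```

In practice the "name" would come from recognizing the collected depth or image data, not be handed in directly.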
Step 102: and under the condition that the target object belongs to a preset object set, acquiring a 3D model of the target object based on the depth information of the target object.
When the 3D model of the target object is obtained based on the depth information of the target object, the terminal device may construct the 3D model itself from the depth information; alternatively, the terminal device uploads the depth information of the target object to a server and receives the 3D model that the server constructs from that depth information.
Building the 3D model on the terminal device saves the time consumed in uploading the depth information to the server and receiving the 3D model back, improving the efficiency of obtaining the 3D model of the target object. Having the server construct the 3D model and send it to the terminal device effectively reduces the processing burden on the terminal device.
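The two construction paths described above, on-device versus server-side, amount to a simple dispatch on where the work happens. The callables below are hypothetical stand-ins for a real 3D reconstruction pipeline; nothing here is specified by the patent.

```python
def obtain_3d_model(depth_info, build_locally, local_builder, server_builder):
    """Obtain the target object's 3D model via one of the two paths in the
    text: build on-device (lower latency) or offload to a server (lower
    on-device processing load). Both builders are illustrative callables
    taking depth information and returning a model."""
    if build_locally:
        return local_builder(depth_info)    # on-device construction
    return server_builder(depth_info)       # upload depth info, receive model
```

The trade-off the text describes (latency versus device load) is exactly the choice of which builder to pass.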
Step 103: a 3D model of the target object is displayed.
After the 3D model of the target object is obtained, it is displayed on the current interface of the terminal device.
Step 104: a first input by a user of a terminal device is received.
After the 3D model is displayed, query information may pop up asking the terminal device user whether the displayed 3D model is the object to be searched. If the user determines that it is, the user can enter a confirmation instruction, namely the first input.
If 3D models of multiple target objects are displayed in the shooting preview interface, the terminal device can detect the user's selection of one of the 3D models and determine the target object corresponding to the selected model as the object to be searched. The selection operation on a 3D model is the first input from the terminal device user.
Step 105: and responding to the first input, and outputting the distance prompt information of the target object.
The distance prompt information is used for prompting the real-time distance between the target object and the terminal equipment user. The terminal equipment user can easily find the object to be searched through the distance prompt information.
The distance prompt information may be text information, voice information, or information combining text and voice, and the specific type of the distance prompt information is not particularly limited in the embodiment of the present invention.
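A minimal sketch of choosing between text and voice output for the distance prompt. The message wording and the channel tags are assumptions made for illustration.

```python
def distance_prompt(distance_m: float, mode: str = "text"):
    """Format the real-time distance prompt as text or as a voice message.
    The ("display" / "speak") channel tags are illustrative."""
    msg = f"Target object is {distance_m:.1f} m away"
    if mode == "voice":
        return ("speak", msg)    # would be handed to a text-to-speech engine
    return ("display", msg)      # shown on the terminal's current interface
```

A combined text-and-voice prompt, also mentioned in the text, would simply emit both tuples.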
According to the object searching method provided by the embodiment of the invention, depth information of the target object within the view angle range is collected through the TOF camera; when the target object belongs to a preset object set, a 3D model of the target object is obtained based on the depth information of the target object; the 3D model of the target object is displayed; a first input from the terminal device user is received; and, in response to the first input, distance prompt information of the target object is output, where the distance prompt information is used to prompt the real-time distance between the target object, that is, the object to be searched, and the terminal device user. The object to be searched can thus be quickly located, and the output distance prompt information guides the terminal device user to the position of the object to be searched, saving the time and energy the terminal device user spends searching for it.
Referring to fig. 2, a second flowchart of the object searching method according to the embodiment of the present invention is shown.
The object searching method in the embodiment of the invention specifically comprises the following steps:
step 201: and receiving object information of an object to be searched, which is input by a terminal equipment user.
The object information may include at least one of: an object name and object features. Where the object information includes object features, the features include at least one of object type and object size, and the same object type corresponds to at least one object model. Classifying the object information of objects to be searched in this way allows the object to be searched to be determined quickly and accurately from the target objects found within the view angle range.
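The object information structure (a name plus optional features) and a "match on whatever the user supplied" rule can be sketched as follows. The field names and the matching rule are illustrative assumptions, not the patent's definitions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectInfo:
    """Object information: name and/or features (type, size), per the text."""
    name: Optional[str] = None
    object_type: Optional[str] = None      # e.g. "book", "remote control"
    size_inches: Optional[float] = None

def info_matches(query: "ObjectInfo", entry: "ObjectInfo") -> bool:
    """An entry matches when every field the user actually supplied agrees;
    unspecified fields are ignored."""
    for field in ("name", "object_type", "size_inches"):
        wanted = getattr(query, field)
        if wanted is not None and getattr(entry, field) != wanted:
            return False
    return True
```

Step 204 below then reduces to running `info_matches` against each entry of the preset object set.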
In the embodiment of the invention, the object set is created in advance. Each object in the set can be classified according to its object features and divided by object type and object size, so that every object in the object set is classified in finer detail according to the features it has.
In the embodiment of the invention, when the terminal device user searches for the object to be searched, the user inputs the object information of that object. The terminal device can then search for the object according to the object information or object features, which improves search efficiency.
Step 202: and storing the object information into the object set.
The object information of each object to be searched is stored in the object set, so that the object characteristics can be determined according to the object information of the object to be searched when the object to be searched is searched subsequently, and whether the object to be searched is contained in the target object or not can be judged in a targeted mode according to the object characteristics.
Step 203: and collecting depth information of the target object within the visual angle range through the TOF camera.
The viewing angle range of the TOF camera can be adjusted according to requirements when used by a terminal device user, which is not specifically limited in the embodiments of the present invention.
After the depth information of the target object is collected, it is stored. In a specific implementation, different strategies can be adopted to store the depth information of the target object, as follows.
can be saved according to the view angle range: the view angle range of the TOF camera is fixed and unchanged, only the depth information of the target object within the view angle range can be stored, for example, if the view angle range of the current TOF camera is 80 degrees, and the current view angle is 0 degree to 80 degrees, only the depth information of the target object within the range of 0 degree to 80 degrees is stored, and if the new view angle after the camera is rotated is 30 degrees to 110 degrees, the previously stored depth information of the target object within the range of 0 degree to 80 degrees is cleared, and the depth information of the target object within the range of 30 degrees to 110 degrees is stored.
Saving according to whether the target object belongs to the preset object set: if the target object belongs to the locally preset object set, its depth information is saved; otherwise, it is not. Saving according to the object features of the object to be searched: for example, if the object to be searched is circular, only the depth information of circular target objects is retained.
Saving according to the category of the object to be searched: a set of two-dimensional plane images can be established per object category; for example, the book category corresponds to one or more two-dimensional plane images, as does the remote-control category. If the keyword of the object to be searched is "book", only depth information of target objects whose two-dimensional plane image matches a book is stored; if the keyword is "remote control", only depth information of target objects whose image matches a remote control is stored.
Saving according to the size of the object to be searched: for example, if the object to be searched is smaller than 8.2 inches, only the depth information of target objects smaller than 8.2 inches is saved.
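The category and size strategies above can be sketched as one filter over the collected detections. The dictionary keys and the representation of a detection are illustrative assumptions.

```python
def filter_candidates(detections, category=None, max_size_in=None):
    """Keep only depth info for detections matching the search criteria,
    e.g. category "book" or size under 8.2 inches, per the strategies above.
    Each detection is an illustrative dict with "category" and "size_in"."""
    kept = []
    for det in detections:
        if category is not None and det["category"] != category:
            continue                          # wrong category: do not store
        if max_size_in is not None and det["size_in"] >= max_size_in:
            continue                          # too large: do not store
        kept.append(det)
    return kept
```

Combining criteria, as the text's strategies allow, is just passing both keyword arguments.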
Step 204: in case the object information of the target object matches at least one object information of the set of objects, a 3D model of the target object is obtained based on the depth information of the object.
When the 3D model of the target object is obtained based on the depth information of the target object, the terminal device may construct the 3D model itself from the depth information; alternatively, the terminal device uploads the depth information of the target object to a server and receives the 3D model that the server constructs from that depth information.
Building the 3D model on the terminal device saves the time consumed in uploading the depth information to the server and receiving the 3D model back, improving the efficiency of obtaining the 3D model of the target object. Having the server construct the 3D model and send it to the terminal device effectively reduces the processing burden on the terminal device.
Step 205: a 3D model of the target object is displayed.
And after the 3D model of the target object is acquired, displaying the acquired 3D model on a current interface of the terminal equipment.
Step 206: a first input by a user of a terminal device is received.
After the 3D model is displayed, query information may pop up asking the terminal device user whether the displayed 3D model is the object to be searched. If the user determines that it is, the user can enter a confirmation instruction, namely the first input.
If 3D models of multiple target objects are displayed in the shooting preview interface, the terminal device can detect the user's selection of one of the 3D models and determine the target object corresponding to the selected model as the object to be searched. The selection operation on a 3D model is the first input from the terminal device user.
Step 207: in response to the first input, distance hint information for the target object is output.
The distance prompt information is used for prompting the real-time distance between the target object and the terminal equipment user.
A preferred way to output the distance prompt information of the target object is: while the distance between the terminal device and the target object gradually decreases, play voice prompt information in real time until the distance falls below a preset threshold. The voice prompt informs the terminal device user of the distance to the target object. The preset threshold may be set by a person skilled in the art according to actual requirements, for example 0.05, 0.1, or 0.2 meters; the embodiment of the present invention does not specifically limit it.
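The real-time voice guidance loop, stopping once the distance falls below the preset threshold, can be sketched as follows. The prompt wording and the injected `speak` callback are assumptions; a real implementation would read the distance from the TOF camera each cycle.

```python
def guide_user(distances, threshold_m=0.1, speak=print):
    """Play a voice prompt for each successive distance reading as the
    user approaches, and stop once within `threshold_m` of the target.
    `distances` stands in for a live stream of TOF range readings."""
    for d in distances:
        if d < threshold_m:
            speak("You have reached the target object.")
            return d                       # guidance complete
        speak(f"Target is {d:.2f} m ahead.")
    return None                            # stream ended before arrival
```

Calling `guide_user([0.8, 0.4, 0.05])` would announce two distances and then the arrival message.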
With this way of outputting distance prompt information, the terminal device user does not need to watch the terminal device's display interface; the user can find the target object quickly and accurately through the voice prompts alone, which is convenient to operate and improves the user experience.
According to the object searching method provided by the embodiment of the invention, depth information of the target object within the view angle range is collected through the TOF camera; when the target object belongs to a preset object set, a 3D model of the target object is obtained based on the depth information of the target object; the 3D model of the target object is displayed; a first input from the terminal device user is received; and, in response to the first input, distance prompt information of the target object is output, where the distance prompt information is used to prompt the real-time distance between the target object, that is, the object to be searched, and the terminal device user. The object to be searched can thus be quickly located, and the output distance prompt information guides the terminal device user to the position of the object to be searched, saving the time and energy the terminal device user spends searching for it. In addition, on one hand, the object to be searched is looked up according to the input object information, so it can be quickly and specifically identified among the collected target objects, improving search efficiency. On the other hand, voice prompt information is played in real time as the distance between the terminal device and the target object gradually decreases; the user does not need to watch the display interface of the terminal device and can find the target object quickly and accurately through the voice prompts alone, which is convenient to operate and improves the user experience.
Referring to fig. 3, a block diagram of a terminal device according to an embodiment of the present invention is shown. The terminal device can realize the details of the object searching method in the foregoing embodiment, and achieve the same effect.
The terminal equipment of the embodiment of the invention comprises a TOF camera, and the terminal equipment further comprises: the acquisition module 301 is configured to acquire depth information of a target object within a view angle range through the TOF camera; an obtaining module 302, configured to obtain a 3D model of the target object based on depth information of the target object when the target object belongs to a preset object set; a display module 303 for displaying a 3D model of the target object; a first receiving module 304, configured to receive a first input of a terminal device user; a prompt output module 305, configured to output, in response to the first input, distance prompt information of the target object, where the distance prompt information is used to prompt a real-time distance between the target object and the terminal device user. The acquisition module 301, the acquisition module 302, the display module 303, the first receiving module 304, and the prompt output module 305 are connected in sequence.
Preferably, the terminal device further includes: a second receiving module, configured to receive the object information of the object to be searched input by the terminal device user before the acquisition module collects the depth information of the target object within the preset view angle range through the TOF camera; and a first storage module, configured to store the object information into the object set. The obtaining module is specifically configured to: obtain a 3D model of the target object based on the depth information of the target object when the object information of the target object matches at least one piece of object information in the object set. The second receiving module, the first storage module, the acquisition module, the display module, the first receiving module, and the prompt output module are connected in sequence.
Preferably, the object information includes at least one of: object name, object characteristics; in a case where the object information includes an object feature, the object feature includes at least one of an object type and an object size; wherein, the same object type corresponds to at least one object model.
Preferably, the terminal device further includes: a second storage module, configured to store the depth information of the target object when the target object belongs to the preset object set, after the acquisition module collects the depth information of the target object within the preset view angle range through the TOF camera. The acquisition module, the second storage module, the obtaining module, the display module, the first receiving module, and the prompt output module are connected in sequence.
Preferably, the obtaining module includes: a first sub-module for constructing a 3D model of the target object based on depth information of the target object; or the second sub-module is used for uploading the depth information of the target object to a server and acquiring the 3D model which is sent by the server and constructed based on the depth information of the target object.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
According to the terminal device provided by the embodiment of the invention, depth information of the target object within the view angle range is collected through the TOF camera; when the target object belongs to a preset object set, a 3D model of the target object is obtained based on the depth information of the target object; the 3D model of the target object is displayed; a first input from the terminal device user is received; and, in response to the first input, distance prompt information of the target object is output, where the distance prompt information is used to prompt the real-time distance between the target object, that is, the object to be searched, and the terminal device user. The object to be searched can thus be quickly located, and the output distance prompt information guides the terminal device user to the position of the object to be searched, saving the time and energy the terminal device user spends searching for it.
Referring to fig. 4, a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention is shown.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 4 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 410 for collecting depth information of a target object within a viewing angle range by a TOF camera; under the condition that the target object belongs to a preset object set, acquiring a 3D model of the target object based on depth information of the target object; displaying a 3D model of the target object; receiving a first input of a terminal device user; and responding to the first input, outputting distance prompt information of the target object, wherein the distance prompt information is used for prompting the real-time distance between the target object and the terminal equipment user.
The terminal device of this embodiment thus collects depth information of the target object through the TOF camera, acquires and displays a 3D model of the target object when it belongs to the preset object set, and, in response to the first input, outputs distance prompt information indicating the real-time distance between the target object (the object to be searched) and the user, guiding the user to the object's position and saving search time and effort.

In this embodiment, the radio frequency unit 401 may be used to receive and transmit signals during information transmission or a call. Specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. The radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
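The distance-prompt behaviour described in this embodiment — repeatedly announcing the real-time distance until the user is within a preset threshold of the object — can be sketched as follows. This is an illustrative Python sketch; the message wording and the 0.5 m default threshold are assumptions, not values specified by the patent.

```python
def distance_prompts(distances, threshold_m=0.5):
    """Turn a stream of real-time distance readings (metres) into
    prompt messages, stopping once the distance falls below the
    preset threshold (the object has effectively been reached)."""
    prompts = []
    for d in distances:
        if d < threshold_m:
            prompts.append("You have reached the object.")
            break
        prompts.append(f"Object is {d:.1f} m away.")
    return prompts
```

In a real device these strings would be fed to a text-to-speech engine or shown on screen rather than collected in a list.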
The terminal device provides wireless broadband internet access to the user through the network module 402, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402, or stored in the memory 409, into an audio signal and output it as sound. The audio output unit 403 may also provide audio output related to a specific function performed by the terminal device 400 (e.g., a call-signal reception sound or a message reception sound). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode, where the image capture device includes the TOF camera. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or another storage medium), or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data; in phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 401.
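Depth frames from a TOF camera of the kind described here are commonly back-projected into a 3D point cloud before any model construction. The following Python sketch shows the standard pinhole back-projection; the patent does not specify how this step is implemented, so the intrinsics-based formulation (`fx`, `fy`, `cx`, `cy`) is an assumption for illustration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a TOF depth map (metres, shape H x W) into
    camera-frame 3D points using pinhole intrinsics.
    Returns an (H*W, 3) array of (x, y, z) coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth
    x = (u - cx) * z / fx   # horizontal offset scaled by depth
    y = (v - cy) * z / fy   # vertical offset scaled by depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

A point cloud produced this way is a typical input for the 3D-model construction step, whether performed on the device or on a server.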
The terminal device 400 further includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which adjusts the brightness of the display panel 4061 according to the ambient light, and a proximity sensor, which turns off the display panel 4061 and/or the backlight when the terminal device 400 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the terminal device posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
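The portrait/landscape switching mentioned above is typically derived from the gravity components reported by the accelerometer. A minimal illustrative Python sketch, assuming gravity readings in m/s² along the device's x and y axes (the classification rule is a simplification, not the patent's method):

```python
def screen_orientation(ax, ay):
    """Classify portrait vs. landscape from the gravity components
    along the device's x (short edge) and y (long edge) axes.
    Gravity dominating the y axis implies the device is upright."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```

Production implementations add hysteresis and filtering so the screen does not flip on small tilts.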
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations performed on or near the touch panel 4071 with a finger, a stylus, or any suitable object or accessory). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 410, and receives and executes commands from the processor 410. The touch panel 4071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described here again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 408 is an interface for connecting an external device to the terminal apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 400 or may be used to transmit data between the terminal apparatus 400 and an external device.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 409 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the terminal device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The terminal device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 400 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the above object searching method embodiment and achieves the same technical effect; details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the object search method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An object searching method is applied to a terminal device comprising a TOF camera, and is characterized by comprising the following steps:
collecting depth information of a target object within a visual angle range through a TOF camera;
under the condition that the target object belongs to a preset object set, acquiring a 3D model of the target object based on depth information of the target object;
displaying a 3D model of the target object;
receiving a first input of a terminal device user;
responding to the first input, outputting distance prompt information of the target object, wherein the distance prompt information is used for prompting a real-time distance between the target object and the terminal equipment user;
before the depth information of the target object within the preset view angle range is collected through the TOF camera, the method further includes:
receiving object information of an object to be searched, which is input by a terminal device user;
storing the object information into an object set;
the acquiring, in a case where the target object belongs to a preset object set, a 3D model of the target object based on the depth information of the target object comprises:
acquiring the 3D model of the target object based on the depth information of the target object in a case where the object information of the target object matches at least one piece of object information in the object set.
2. The method of claim 1, wherein the object information comprises at least one of: object name, object characteristics;
in a case where the object information includes an object feature, the object feature includes at least one of an object type and an object size;
wherein, the same object type corresponds to at least one object model.
3. The method of claim 1, wherein after the collecting depth information of the target object within the preset view angle range by the TOF camera, the method further comprises:
and storing the depth information of the target object under the condition that the target object belongs to a preset object set.
4. The method of claim 1, wherein obtaining the 3D model of the target object based on the depth information of the target object comprises:
constructing a 3D model of the target object based on the depth information of the target object;
or uploading the depth information of the target object to a server, and acquiring a 3D model which is sent by the server and constructed based on the depth information of the target object.
5. The method of claim 1, wherein outputting distance hint information for the target object comprises:
in the process that the distance between the terminal equipment and the target object is gradually reduced, voice prompt information is played in real time until the distance between the terminal equipment and the target object is smaller than a preset threshold value;
the voice prompt information is used for prompting the distance between the terminal equipment user and the target object.
6. A terminal device comprising a TOF camera, wherein the terminal device further comprises:
the acquisition module is used for acquiring the depth information of the target object within the view angle range through the TOF camera;
the acquisition module is used for acquiring a 3D model of the target object based on the depth information of the target object under the condition that the target object belongs to a preset object set;
a display module for displaying a 3D model of the target object;
the first receiving module is used for receiving a first input of a terminal equipment user;
a prompt output module, configured to output, in response to the first input, distance prompt information of the target object, where the distance prompt information is used to prompt a real-time distance between the target object and a user of the terminal device;
the terminal device further includes:
the second receiving module is used for receiving the object information of the object to be searched, which is input by the terminal equipment user, before the acquisition module acquires the depth information of the target object within the preset view angle range through the TOF camera;
the first storage module is used for storing the object information into an object set;
the acquisition module is specifically configured to: acquire the 3D model of the target object based on the depth information of the target object in a case where the object information of the target object matches at least one piece of object information in the object set.
7. The terminal device according to claim 6, wherein the object information includes at least one of: object name, object characteristics;
in a case where the object information includes an object feature, the object feature includes at least one of an object type and an object size;
wherein, the same object type corresponds to at least one object model.
8. The terminal device according to claim 6, wherein the terminal device further comprises:
and the second storage module is used for storing the depth information of the target object under the condition that the target object belongs to a preset object set after the acquisition module acquires the depth information of the target object within a preset view angle range through the TOF camera.
9. The terminal device of claim 6, wherein the obtaining module comprises:
a first sub-module for constructing a 3D model of the target object based on the depth information of the target object; or,
and the second submodule is used for uploading the depth information of the target object to a server and acquiring a 3D model which is sent by the server and constructed based on the depth information of the target object.
10. The terminal device of claim 6, wherein the prompt output module is specifically configured to:
in the process that the distance between the terminal equipment and the target object is gradually reduced, voice prompt information is played in real time until the distance between the terminal equipment and the target object is smaller than a preset threshold value;
the voice prompt information is used for prompting the distance between the terminal equipment user and the target object.
11. A terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the object search method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the object search method according to any one of claims 1 to 5.
CN201811204693.0A 2018-10-16 2018-10-16 Object searching method and terminal equipment Active CN109472825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811204693.0A CN109472825B (en) 2018-10-16 2018-10-16 Object searching method and terminal equipment


Publications (2)

Publication Number Publication Date
CN109472825A CN109472825A (en) 2019-03-15
CN109472825B true CN109472825B (en) 2021-06-25

Family

ID=65664690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811204693.0A Active CN109472825B (en) 2018-10-16 2018-10-16 Object searching method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109472825B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112413827A (en) * 2020-11-23 2021-02-26 珠海格力电器股份有限公司 Intelligent air conditioner and information display method and device thereof
CN112911358B (en) * 2021-01-12 2023-01-20 海信视像科技股份有限公司 Laser television and human eye protection method based on laser television
CN113341737B (en) * 2021-05-18 2023-11-10 珠海格力电器股份有限公司 Control method, system, device, equipment and storage medium of intelligent household equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236520A (en) * 2010-05-06 2011-11-09 Lg电子株式会社 Mobile terminal and image display method therein
JP2016181181A (en) * 2015-03-24 2016-10-13 キヤノン株式会社 Image processing apparatus, image processing method, and program

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771808B1 (en) * 2000-12-15 2004-08-03 Cognex Corporation System and method for registering patterns transformed in six degrees of freedom using machine vision
JP2008275366A (en) * 2007-04-26 2008-11-13 Tokyo Institute Of Technology Stereoscopic 3-d measurement system
JP5673624B2 (en) * 2012-07-24 2015-02-18 カシオ計算機株式会社 Object search apparatus, method, and program
JP2015105899A (en) * 2013-12-02 2015-06-08 シャープ株式会社 Three-dimension measuring device
CN104599314A (en) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
WO2018205230A1 (en) * 2017-05-11 2018-11-15 深圳前海达闼云端智能科技有限公司 Item search method and device, and robot
CN107463659B (en) * 2017-07-31 2020-07-17 Oppo广东移动通信有限公司 Object searching method and device
CN108052992A (en) * 2017-11-22 2018-05-18 同济大学 A kind of Intelligent indoor is looked for something storage system



Similar Documents

Publication Publication Date Title
CN110113528B (en) Parameter obtaining method and terminal equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN109409244B (en) Output method of object placement scheme and mobile terminal
CN109558000B (en) Man-machine interaction method and electronic equipment
WO2020253340A1 (en) Navigation method and mobile terminal
CN108881544B (en) Photographing method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN109472825B (en) Object searching method and terminal equipment
CN109521684B (en) Household equipment control method and terminal equipment
CN108984066B (en) Application icon display method and mobile terminal
CN111124223A (en) Application interface switching method and electronic equipment
CN111163449B (en) Application sharing method, first electronic device and computer-readable storage medium
CN111190515A (en) Shortcut panel operation method, device and readable storage medium
CN108924413B (en) Shooting method and mobile terminal
CN108062370B (en) Application program searching method and mobile terminal
CN109947345B (en) Fingerprint identification method and terminal equipment
CN111443764A (en) Separation state processing method and electronic equipment
CN108040003B (en) Reminding method and device
CN110928616A (en) Shortcut icon management method and electronic equipment
CN111147750B (en) Object display method, electronic device, and medium
CN111338526B (en) Screenshot method and device
CN112985373B (en) Path planning method and device and electronic equipment
CN110769153B (en) Image processing method and electronic equipment
CN110086916B (en) Photographing method and terminal
CN110489037B (en) Screen capturing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant