CN113109835B - Information processing method and electronic equipment - Google Patents


Info

Publication number
CN113109835B
CN113109835B (Application No. CN202110280192.6A)
Authority
CN
China
Prior art keywords
data
gradient
obstacle
target range
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110280192.6A
Other languages
Chinese (zh)
Other versions
CN113109835A (en
Inventor
Lu Xiaopeng (卢晓鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110280192.6A priority Critical patent/CN113109835B/en
Publication of CN113109835A publication Critical patent/CN113109835A/en
Application granted granted Critical
Publication of CN113109835B publication Critical patent/CN113109835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an information processing method, which includes the following steps: obtaining first data through a first sensor of an electronic device, wherein the first sensor has a sensing space, the sensing space forms a target range on the ground, and the first data represent a set of depth data from the electronic device to the target range; converting the first data into multiple groups of gradient data; and determining whether an obstacle exists within the target range based on the multiple groups of gradient data. The application also provides a corresponding electronic device.

Description

Information processing method and electronic equipment
Technical Field
The present application relates to information processing technologies, and in particular, to an information processing method and an electronic device.
Background
Mobile phones are used more and more frequently in daily life, and users sometimes browse their phones while walking. Because the phone blocks the line of sight and diverts attention, defects such as pits in the road surface or obstacles on the travel path pose a risk to the user's personal safety. How to remind the user when there is an obstacle in the direction of travel is therefore a problem that currently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present application are expected to provide an information processing method and an electronic device.
In order to achieve the above purpose, the technical scheme of the application is realized as follows:
according to an aspect of the present application, there is provided an information processing method including:
obtaining first data by a first sensor of the electronic device; wherein the first sensor has a sensing space; the sensing space forms a target range on the ground, and the first data is used for representing a set of depth data from the electronic equipment to the target range;
converting the first data into a plurality of groups of gradient data;
determining whether there is an obstacle within the target range based on the plurality of sets of gradient data.
In the above scheme, the method further comprises:
obtaining second data, wherein the second data represents displacement information of the electronic equipment;
and if the second data represent that the electronic equipment is in the travelling process, acquiring the first data in real time.
In the above scheme, the converting the first data into a plurality of sets of gradient data includes:
calculating the gradient value of the first data in the horizontal direction to obtain a first group of gradient matrices;
calculating gradient values of the first data in the vertical direction to obtain a second group of gradient matrices;
and obtaining a third group of gradient matrices based on the sum of squares of the first group of gradient matrices and the second group of gradient matrices.
In the above aspect, the determining whether there is an obstacle in the target range based on the plurality of sets of gradient data includes:
calculating the difference value of any two adjacent gradient values in the third group of gradient matrices;
and if the calculated result represents that the difference value of any two adjacent gradient values is larger than a preset threshold value, determining that the target range has an obstacle.
In the above scheme, the method further comprises:
when determining that the target range has an obstacle, respectively calculating the difference value of two adjacent gradient values in the first gradient matrix and the second gradient matrix;
and determining the specific position of the obstacle according to the calculation results of the first gradient matrix and the second gradient matrix.
In the above aspect, when it is determined that the target range has an obstacle based on the plurality of sets of gradient data, the method further includes:
and outputting alarm information representing that the object range has barriers.
In the above solution, before outputting the alarm information indicating that the target range has an obstacle, the method further includes:
starting an image acquisition function of the electronic equipment to acquire image information in the target range;
outputting alarm information representing that the target range has an obstacle, including:
and if the image information contains the obstacle, outputting alarm information representing that the obstacle exists in the target range.
In the above scheme, outputting the alarm information representing that the target range has the obstacle includes:
marking the position of the obstacle in the image information;
or outputting a prompt tone representing that the target range has an obstacle.
In the above scheme, the method further comprises:
and if the angular velocity change value representing the X direction and the Y direction in the second data is smaller than the angular velocity change value in the Z direction, determining that the electronic equipment is in the travelling process.
According to another aspect of the present application, there is provided an electronic apparatus including:
a first sensor having a sensing space; the sensing space forms a target range on the ground; obtaining, by the first sensor, first data for characterizing a set of depth data of the electronic device to the target range;
the computing unit is used for converting the first data into a plurality of groups of gradient data;
and a determining unit for determining whether there is an obstacle in the target range based on the plurality of sets of gradient data.
According to the information processing method and the electronic device of the application, first data are obtained through a first sensor of the electronic device, wherein the first sensor has a sensing space, the sensing space forms a target range on the ground, and the first data represent a set of depth data from the electronic device to the target range; the first data are converted into multiple groups of gradient data; and whether an obstacle exists within the target range is determined based on the multiple groups of gradient data. In this way, gradient-change detection is performed on the depth data from the electronic device to the target range on the ground, so an obstacle in the user's direction of travel can be effectively detected, which improves the user's personal safety and reduces the risk of using the device while walking.
Drawings
FIG. 1 is a schematic diagram of a flow implementation of an information processing method according to the present application;
FIG. 2 is a diagram of acquiring first data according to the present application;
FIG. 3 is a schematic diagram of a method for determining whether a road surface has an obstacle according to the present application;
FIG. 4 is a second schematic diagram of a method for determining whether a road surface has an obstacle according to the present application;
FIG. 5 is a schematic diagram showing the structural components of an electronic device according to the present application;
FIG. 6 is a second schematic diagram of the structural composition of an electronic device according to the present application.
Detailed Description
The technical scheme of the application is further elaborated below by referring to the drawings in the specification and the specific embodiments.
FIG. 1 is a schematic flow diagram of an information processing method in the present application. As shown in FIG. 1, the method includes:
Step 101, obtaining first data through a first sensor of an electronic device; wherein the first sensor has a sensing space; the sensing space forms a target range on the ground, and the first data are used for representing a set of depth data from the electronic device to the target range;
In the application, the electronic device can be a mobile phone, a handheld computer, an electronic book reader, a camera, a game console, a sweeping robot, a meal-delivery robot, an autonomous vehicle, or the like. A first sensor is arranged in the electronic device and has a sensing space. When the first sensor is in an operating state, the sensing space forms a target range on the ground, and first data from the electronic device to the target range can be obtained through the first sensor.
Here, the first data may specifically refer to a set of depth data of the electronic device to the target range.
For example, the first sensor may be a Time of Flight (TOF) sensor, which has a conical sensing space. When the TOF sensor is in an operating state, it emits infrared light toward the ground using a tiny laser, and the emitted light forms a target area on the ground, such as a sector area or a circular area. The emitted infrared light then bounces back from the ground and returns to the TOF sensor, and the TOF sensor can measure the distance from the electronic device to the target area on the ground, that is, a set of depth data from the electronic device to the sector or circular area, according to the time difference between emitting the light and receiving the light reflected back from the ground. Here, the depth data include data for a plurality of position points from the electronic device to the sector or circular area.
Here, the data provided by the TOF sensor can generally be configured as a two-dimensional M×N frame; the larger M and N are, the finer the detection, so the capability of judging a detection target can be tuned to different levels of fineness by choosing M and N.
As shown in FIG. 2, the data form a 4×4 matrix with 4 distance points per row and 4 rows in total; here both M and N are 4.
Compared with other distance sensors (e.g., ultrasonic or laser rangefinders), the TOF sensor used in the present application can compose a 3D image of a scene very quickly; for example, a TOF camera needs only a single measurement to do so. Furthermore, the TOF sensor can accurately detect objects in a short time and is not affected by humidity, air pressure, or temperature, making it suitable for both indoor and outdoor use.
In the application, the electronic device is also provided with a second sensor, through which second data can be obtained; the second data specifically represent displacement information of the electronic device. For example, the second sensor may be an angular-velocity gyroscope, by means of which displacement data of the electronic device in the X, Y, and Z directions can be obtained. If the second data indicate that the electronic device is in the process of traveling, the first sensor in the electronic device is triggered to acquire the first data in real time.
Specifically, if the angular velocity change values in the X direction and the Y direction in the second data are smaller than the angular velocity change value in the Z direction, it is determined that the electronic device is in the process of traveling. The second data obtained by the second sensor therefore make it possible to judge whether the user is currently in a walk-and-browse mode and to distinguish this from other normal hand-held movements. Obtaining the first data only while the electronic device is traveling also reduces the power consumption of the electronic device.
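As an illustration only, the walk-and-browse check described above might look like the following Python sketch; the function name is_walking and the sampling interface are assumptions, and the peak-to-peak comparison is just one way to read "angular velocity change value".

```python
def is_walking(gyro_samples):
    """Heuristic travel-mode check based on the second-sensor (gyroscope) data.

    gyro_samples: list of (wx, wy, wz) angular-velocity readings over a short window.
    The device is treated as 'traveling' when the change in angular velocity about
    the X and Y axes stays smaller than the change about the Z axis, as described above.
    """
    if len(gyro_samples) < 2:
        return False

    def change(axis):
        values = [sample[axis] for sample in gyro_samples]
        return max(values) - min(values)  # peak-to-peak change over the window

    dx, dy, dz = change(0), change(1), change(2)
    return dx < dz and dy < dz
```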
Step 102, converting the first data into a plurality of groups of gradient data;
In the application, once the first data are obtained through the first sensor, the electronic device can construct multiple groups of gradient matrices based on the first data.
For example, a first set of gradient matrices, a second set of gradient matrices, and a third set of gradient matrices may be constructed based on the first data, wherein the first set characterizes a row matrix of the first data, the second set characterizes a column matrix of the first data, and the third set characterizes a diagonal matrix of the first data.
Specifically, after the electronic device obtains the first data, a first group of gradient matrices is obtained by calculating gradient values of the first data in the horizontal direction; a second group of gradient matrices is obtained by calculating gradient values of the first data in the vertical direction; and a third group of gradient matrices is then derived from the sum of squares of the first and second groups of gradient matrices.
Step 103, determining whether an obstacle exists in the target range based on the plurality of groups of gradient data.
In the application, after the electronic device converts the first data into multiple groups of gradient data, it can at least calculate the difference value of any two adjacent gradient values in the third group of gradient matrices; if the calculation result shows that the difference between two adjacent gradient values is larger than a preset threshold, it is determined that an obstacle exists within the target range formed on the ground. Then, when an obstacle exists within the target range, the difference values of adjacent gradient values in the first and second groups of gradient matrices are calculated respectively, and the specific position of the obstacle is determined according to the calculation results of the first and second groups of gradient matrices.
In the application, if the difference value between any two adjacent data in the first, second, or third group of gradient matrices exceeds the threshold, the data value exceeding the threshold is determined to be the data value corresponding to the position of the obstacle; at this point it represents a change in the road surface condition or the area where an obstacle appears.
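A hedged sketch of this detection step, reusing the gradient_matrices helper above; the threshold of 32 is only the example value used later in the description, and the localisation rule is one plausible reading of "determining the specific position according to the calculation results of the first and second gradient matrices".

```python
import numpy as np

def detect_obstacle(c1, c2, c3, threshold=32.0):
    """Report an obstacle when any two adjacent values of C3 differ by more than
    the threshold, then narrow the position down from the jumps in C1 and C2."""
    # Differences between adjacent C3 entries, along rows and along columns.
    jump_found = (np.abs(np.diff(c3, axis=1)) > threshold).any() or \
                 (np.abs(np.diff(c3, axis=0)) > threshold).any()
    if not jump_found:
        return False, None

    # Rough localisation: the largest adjacent-value jumps in C1 (row direction)
    # and C2 (column direction) mark where the gradient changes abruptly.
    dc1 = np.abs(np.diff(c1, axis=1))
    dc2 = np.abs(np.diff(c2, axis=0))
    col = int(np.unravel_index(dc1.argmax(), dc1.shape)[1])
    row = int(np.unravel_index(dc2.argmax(), dc2.shape)[0])
    return True, (row, col)
```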
FIG. 2 is a schematic diagram of acquiring the first data in the present application. As shown in FIG. 2, in the traveling mode, as the user holds the mobile phone and the body or hand shakes, the data returned by the TOF sensor of the phone at time t1 and time t2 are the data of position Data1 and position Data2 respectively. Clearly there is a positional difference between the data at Data1 and Data2, and judging the road surface condition or an obstacle directly from the difference between the two positions would cause the system to misjudge. In the scheme of the application, only a single set of depth data from one position is calculated. For example, consider a set of depth data returned by the TOF sensor for a flat road surface (Data2 is taken as the example here): the data of a flat road surface have the characteristic of a continuous gradient in two dimensions, as shown in the matrix on the left side of FIG. 2, where any two adjacent data values are continuous. The following three groups of gradient matrices are then constructed based on the first data:
First set of gradient matrices: C1_{i,j} = abs(a_{i,j+1} - a_{i,j});
Second set of gradient matrices: C2_{i,j} = abs(a_{i+1,j} - a_{i,j});
Third set of gradient matrices: C3_{i,j} = (a_{i,j+1} - a_{i,j})^2 + (a_{i+1,j} - a_{i,j})^2.
where i ∈ [1, M-1] indexes the row data in the first data, j ∈ [1, N-1] indexes the column data in the first data, and abs() is the absolute-value operation. Whether a road surface feature is concave or convex it is treated as an obstacle, which is why the absolute value is taken here.
If some data in the three groups of matrices C1, C2, and C3 jump, this represents a change in the road surface condition or the appearance of an obstacle, and the position of the jump indicates where the road surface condition changes or where the obstacle appears.
FIG. 3 is a schematic diagram of determining whether there is an obstacle on the road surface according to the present application. Taking position 2 in FIG. 2 as an example, suppose position 2 is currently on a downhill road surface, but the road surface is smooth and free of obstacles and pits. The depth data obtained by the TOF sensor are shown in FIG. 3.
In FIG. 3, TOF Data represents the first data obtained by the TOF sensor (i.e., a set of depth data from the phone at position 2 to the ground), which include data of the downhill road (e.g., 10, 12) and data of the flat road (e.g., 14, 16). The gradient values of the first data in the horizontal direction are then calculated using the formula C1_{i,j} = abs(a_{i,j+1} - a_{i,j}) to obtain the first group of gradient matrices (C1); the gradient values in the vertical direction are calculated using the formula C2_{i,j} = abs(a_{i+1,j} - a_{i,j}) to obtain the second group of gradient matrices (C2); and the sum of squares of the two directional differences, C3_{i,j} = (a_{i,j+1} - a_{i,j})^2 + (a_{i+1,j} - a_{i,j})^2, is calculated to obtain the third group of matrices (C3).
It can be seen from the first group of matrices (C1), the second group of matrices (C2), and the third group of matrices (C3) that any two adjacent values in each matrix are very close; that is, although the road is now changing from flat to downhill, the current downhill section is treated as a normal road rather than a concave obstacle. This prevents the system from being overly sensitive and misjudging.
FIG. 4 is a second schematic diagram of determining whether there is an obstacle on the road surface according to the present application. Again taking position 2 in FIG. 2 as an example, suppose there is a raised step or obstacle at position 2. The depth data obtained by the TOF sensor are shown in FIG. 4.
In FIG. 4, TOF Data represents the first data obtained by the TOF sensor (i.e., a set of depth data from the phone at position 2 to the ground). From the first data it can be seen that, although there are differences between adjacent data, the differences are not large, so at this point it cannot be determined that the current road surface is obstructed. The gradient values of the first data in the horizontal direction are therefore calculated using the formula C1_{i,j} = abs(a_{i,j+1} - a_{i,j}) to obtain the first group of gradient matrices (C1); the gradient values in the vertical direction are calculated using the formula C2_{i,j} = abs(a_{i+1,j} - a_{i,j}) to obtain the second group of gradient matrices (C2); and the sum of squares of the two directional differences, C3_{i,j} = (a_{i,j+1} - a_{i,j})^2 + (a_{i+1,j} - a_{i,j})^2, is calculated to obtain the third group of matrices (C3).
Here, the first matrix (C1), the second matrix (C2), and the third matrix (C3) all show several values that suddenly become larger; that is, there is a gradient change in both the row and column directions at the positions corresponding to those values, which suggests a ground bump, a pit, or an obstacle. In matrices C1 and C2, however, the suddenly larger values differ only slightly from their adjacent values, so to avoid system misjudgment the obstacle decision is made from the difference between two adjacent values in the third matrix (C3) (e.g., 4 and 68; 4 and 72). Because matrix C3 is the diagonal matrix of the first data and contains its total gradient change, the gradient dimensions it covers are more comprehensive than those of C1 and C2.
If the difference between the suddenly larger value and its adjacent value in the third group of matrices (C3) is larger than a preset value (such as 32), position 2 of the current road surface is considered to have an obstacle or a bulge, and the specific position of the obstacle or bulge at position 2 is then determined based on the first group of matrices (C1) and the second group of matrices (C2). In this way it is possible not only to judge whether the road surface has an obstacle but also to determine its specific position, which improves the accuracy of obstacle judgment.
In the present application, if all three groups of matrices contain suddenly increased values but the difference between each such value and its adjacent value is smaller than the preset value (e.g., 32), it is considered that no obstacle or protrusion exists at position 2 of the current road surface and that the road may simply have a normal slope.
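To make the two figure scenarios concrete, the sketch below runs the earlier helpers on two hypothetical 4×4 depth frames standing in for FIG. 3 (a smooth downhill) and FIG. 4 (a raised step); the numeric values are illustrative and are not the actual data shown in the figures.

```python
import numpy as np

downhill = np.array([[10, 10, 10, 10],
                     [12, 12, 12, 12],
                     [14, 14, 14, 14],
                     [16, 16, 16, 16]], dtype=float)   # smooth slope, no obstacle

step = np.array([[10, 10, 10, 10],
                 [10, 10, 10, 10],
                 [10,  2,  2, 10],                     # raised step closer to the sensor
                 [10, 10, 10, 10]], dtype=float)

for name, frame in (("downhill", downhill), ("raised step", step)):
    c1, c2, c3 = gradient_matrices(frame)
    found, _pos = detect_obstacle(c1, c2, c3, threshold=32.0)
    print(f"{name}: obstacle detected = {found}")

# The smooth slope yields a constant C3 (continuous gradient), so no alarm is raised;
# the raised step produces a jump in C3 well above the threshold and is flagged.
```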
In the application, when the electronic device determines, based on the multiple groups of gradient data, that an obstacle exists within the target range on the road surface, it can also output alarm information indicating that an obstacle exists within the target range. For example, the alarm information may be an alert tone output by a buzzer of the electronic device, light output by a light element of the electronic device, or alarm information output through the UI of the display screen of the electronic device.
In the application, before the electronic device outputs the alarm information indicating an obstacle within the target range, the image acquisition function of the electronic device (such as a camera) can be started to acquire image information within the target range; if the image information contains the obstacle, alarm information indicating that an obstacle exists within the target range is output.
Here, the ways in which the electronic device may output alert information indicating an obstacle within the target range include, but are not limited to, marking the position of the obstacle in the image information, outputting a warning tone indicating that an obstacle exists within the target range, or changing the transparency of the background of the UI on the display screen of the electronic device so that, when the UI is in a transparent state, the road surface image acquired by the rear camera of the electronic device is displayed on the UI.
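A minimal sketch of the two-stage alert flow just described, assuming the earlier helpers; capture_image, image_contains_obstacle, and notify are placeholder callbacks, since the application does not specify these interfaces.

```python
def maybe_alert(depth_frame, capture_image, image_contains_obstacle, notify):
    """Gradient-based detection first, camera confirmation second, then alert."""
    c1, c2, c3 = gradient_matrices(depth_frame)
    found, pos = detect_obstacle(c1, c2, c3)
    if not found:
        return False
    frame = capture_image()                    # start the image acquisition function
    if image_contains_obstacle(frame):         # alert only once the image confirms it
        notify(f"Obstacle ahead near grid cell {pos}")  # mark it, beep, or update the UI
        return True
    return False
```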
In this way, whether the road surface has an obstacle is first determined using the gradient matrices and then confirmed using the camera, which greatly improves the accuracy of road surface obstacle judgment and prevents the system from misjudging.
FIG. 5 is a schematic diagram of the structural composition of an electronic device according to the present application. As shown in FIG. 5, the electronic device at least includes:
a first sensor 501 having a sensing space; the sensing space can form a target range on the ground; obtaining, by the first sensor, first data for characterizing a set of depth data of the electronic device to the target range;
a computing unit 502, configured to convert the first data into a plurality of sets of gradient data;
a determining unit 503 for determining whether there is an obstacle in the target range based on the plurality of sets of gradient data.
In a preferred embodiment, the electronic device further comprises a second sensor 504 for obtaining second data, the second data indicating displacement information of the electronic device; if the second data indicate that the electronic device is in the process of traveling, the first sensor 501 is triggered to acquire the first data in real time.
In a preferred scheme, if the angular velocity change values in the X direction and the Y direction in the second data are smaller than the angular velocity change value in the Z direction, it is determined that the electronic device is in the process of traveling.
In a preferred embodiment, the calculating unit 502 is specifically configured to calculate a gradient value of the first data in a horizontal direction, so as to obtain a first set of gradient matrices; calculating gradient values of the first data in the vertical direction to obtain a second group of gradient matrixes; and deriving a third set of gradient matrices based on a sum of squares of the first set of gradient matrices and the second set of gradient matrices.
In a preferred embodiment, the calculating unit 502 is specifically further configured to calculate a difference value between any two adjacent gradient values in the third set of gradient matrices; if the calculation result indicates that the difference value between any two adjacent gradient values is greater than the preset threshold value, the trigger determining unit 503 determines that the target range has an obstacle.
In a preferred embodiment, the calculating unit 502 is specifically further configured to calculate, when it is determined that the target range has an obstacle, a difference value between two adjacent gradient values in the first set of gradient matrices and the second set of gradient matrices, respectively; the determining unit 503 is specifically further configured to determine a specific position of the obstacle according to the calculation results of the first set of gradient matrices and the second set of gradient matrices.
In a preferred embodiment, the electronic device further comprises an output unit 505 for outputting alarm information characterizing that an obstacle is present in the target range when it is determined that the obstacle is present in the target range based on the plurality of sets of gradient data.
In a preferred aspect, the electronic device further includes: the camera 506 is configured to, when it is determined that the target range has an obstacle based on the multiple sets of gradient data, turn on an image acquisition function of the electronic device to acquire image information in the target range;
the output unit 505 outputs alarm information indicating that there is an obstacle in the target range, specifically when the image information includes the obstacle.
In a preferred embodiment, when the output unit 505 outputs alarm information representing that an obstacle exists in the target range, the position of the obstacle is specifically marked in the image information; or a prompt tone representing that an obstacle exists in the target range is output.
It should be noted that: in the electronic device provided in the above embodiment, when information reminding is performed, only the division of each program module is used for illustration, in practical application, the processing allocation may be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules, so as to complete all or part of the processing described above. In addition, the electronic device provided in the foregoing embodiment and the information processing method embodiment provided in the foregoing embodiment belong to the same concept, and specific implementation processes of the electronic device are detailed in the method embodiment and are not repeated herein.
The embodiment of the application also provides electronic equipment, which comprises: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor, when executing the computer program, performs: obtaining first data by a first sensor of the electronic device; wherein the first sensor has an inductive space; the sensing space forms a target range on the ground, and the first data is used for representing a set of depth data from the electronic equipment to the target range;
converting the first data into a plurality of groups of gradient data;
determining whether there is an obstacle within the target range based on the plurality of sets of gradient data.
The processor is further configured to execute, when the computer program is executed: obtaining second data, wherein the second data represents displacement information of the electronic equipment;
and if the second data represent that the electronic equipment is in the travelling process, acquiring the first data in real time.
The processor is further configured to execute, when the computer program is executed: calculating the gradient value of the first data in the horizontal direction to obtain a first group of gradient matrices; calculating gradient values of the first data in the vertical direction to obtain a second group of gradient matrices; and obtaining a third group of gradient matrices based on the sum of squares of the first group of gradient matrices and the second group of gradient matrices.
The processor is further configured to execute, when the computer program is executed: calculating the difference value of any two adjacent gradient values in the third group of gradient matrixes; and if the calculated result represents that the difference value of any two adjacent gradient values is larger than a preset threshold value, determining that the target range has an obstacle.
The processor is further configured to execute, when the computer program is executed: when determining that the target range has an obstacle, respectively calculating the difference value of two adjacent gradient values in the first group of gradient matrices and the second group of gradient matrices; and determining the specific position of the obstacle according to the calculation results of the first group of gradient matrices and the second group of gradient matrices.
The processor is further configured to execute, when the computer program is executed: outputting alarm information representing that an obstacle exists in the target range.
The processor is further configured to execute, when the computer program is executed: starting an image acquisition function of the electronic equipment to acquire image information in the target range;
outputting alarm information representing that the target range has an obstacle, including:
and if the image information contains the obstacle, outputting alarm information representing that the obstacle exists in the target range.
The processor is further configured to execute, when the computer program is executed: marking the position of the obstacle in the image information; or outputting a prompt tone representing that the target range has an obstacle.
The processor is further configured to execute, when the computer program is executed: if the angular velocity change values in the X direction and the Y direction in the second data are smaller than the angular velocity change value in the Z direction, determining that the electronic equipment is in the travelling process.
Fig. 6 is a schematic diagram of a second structural component of an electronic device 600 according to the present application, which may be an electronic book, a mobile phone, a computer, an information transceiver, a game console, a tablet device, a fitness device, a personal digital assistant, a camera, a sweeping robot, a meal delivery robot, an autopilot vehicle, or other terminals with TOF sensors. The electronic device 600 shown in fig. 6 includes: at least one processor 601, a memory 602, at least one network interface 604, and a user interface 603. The various components in the electronic device 600 are coupled together by a bus system 605. It is understood that the bus system 605 is used to enable connected communications between these components. The bus system 605 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 605 in fig. 6.
The user interface 603 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
It is to be appreciated that the memory 602 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 602 described in the embodiments of the application is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 602 in embodiments of the application is used to store various types of data to support the operation of the electronic device 600. Examples of such data include: any computer programs for operation on the electronic device 600, such as an operating system 6021 and application programs 6022; contact data; telephone book data; a message; a picture; audio, etc. The operating system 6021 contains various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 6022 may include various application programs such as a Media Player (Media Player), a Browser (Browser), and the like for implementing various application services. The program for implementing the method of the embodiment of the present application may be included in the application 6022.
The method disclosed in the above embodiment of the present application may be applied to the processor 601 or implemented by the processor 601. The processor 601 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 601 or instructions in the form of software. The processor 601 may be a general purpose processor, a digital signal processor (DSP, digital Signal Processor), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 601 may implement or perform the methods, steps and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiment of the application can be directly embodied in the hardware of the decoding processor or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 602 and the processor 601 reads information in the memory 602 and in combination with its hardware performs the steps of the method as described above.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the aforementioned method.
In an exemplary embodiment, the present application also provides a computer-readable storage medium, such as the memory 602, comprising a computer program executable by the processor 601 of the electronic device 600 to perform the steps of the method described above. The computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, an optical disc, or CD-ROM; or it may be any device that includes one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs: obtaining first data by a first sensor of the electronic device; wherein the first sensor has a sensing space; the sensing space forms a target range on the ground, and the first data is used for representing a set of depth data from the electronic equipment to the target range;
converting the first data into a plurality of groups of gradient data;
determining whether there is an obstacle within the target range based on the plurality of sets of gradient data.
The computer program, when executed by the processor, further performs: obtaining second data, wherein the second data represents displacement information of the electronic equipment; and if the second data represent that the electronic equipment is in the travelling process, acquiring the first data in real time.
The computer program, when executed by the processor, further performs: calculating the gradient value of the first data in the horizontal direction to obtain a first group of gradient matrices; calculating gradient values of the first data in the vertical direction to obtain a second group of gradient matrices; and obtaining a third group of gradient matrices based on the sum of squares of the first group of gradient matrices and the second group of gradient matrices.
The computer program, when executed by the processor, further performs: calculating the difference value of any two adjacent gradient values in the third group of gradient matrixes; and if the calculated result represents that the difference value of any two adjacent gradient values is larger than a preset threshold value, determining that the target range has an obstacle.
The computer program, when executed by the processor, further performs: when determining that the target range has an obstacle, respectively calculating the difference value of two adjacent gradient values in the first group of gradient matrices and the second group of gradient matrices; and determining the specific position of the obstacle according to the calculation results of the first group of gradient matrices and the second group of gradient matrices.
The computer program, when executed by the processor, further performs: outputting alarm information representing that an obstacle exists in the target range.
The computer program, when executed by the processor, further performs: starting an image acquisition function of the electronic equipment to acquire image information in the target range; and if the image information contains the obstacle, outputting alarm information representing that the obstacle exists in the target range.
The computer program, when executed by the processor, further performs: marking the position of the obstacle in the image information; or outputting a prompt tone representing that the target range has an obstacle.
The computer program, when executed by the processor, further performs: if the angular velocity change values in the X direction and the Y direction in the second data are smaller than the angular velocity change value in the Z direction, determining that the electronic equipment is in the travelling process.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto; variations or substitutions readily conceived by any person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An information processing method, the method comprising:
obtaining first data by a first sensor of the electronic device; wherein the first sensor has a sensing space; the sensing space forms a target range on the ground, and the first data is used for representing a set of depth data from the electronic equipment to the target range;
converting the first data into a plurality of sets of gradient data, wherein the plurality of sets of gradient data comprises: a first set of gradient matrices for characterizing a row matrix of the first data, a second set of gradient matrices for characterizing a column matrix of the first data, and a third set of gradient matrices for characterizing a diagonal matrix of the first data;
and determining whether an obstacle exists in the target range or not based on the difference value of any two adjacent gradient data in the plurality of groups of gradient data.
2. The method of claim 1, further comprising:
obtaining second data, wherein the second data represents displacement information of the electronic equipment;
and if the second data represent that the electronic equipment is in the travelling process, acquiring the first data in real time.
3. The method of claim 1, the converting the first data into multiple sets of gradient data, comprising:
calculating the gradient value of the first data in the horizontal direction to obtain a first group of gradient matrices;
calculating gradient values of the first data in the vertical direction to obtain a second group of gradient matrices;
and obtaining a third group of gradient matrices based on the sum of squares of the first group of gradient matrices and the second group of gradient matrices.
4. The method of claim 3, the determining whether there is an obstacle within the target range based on the plurality of sets of gradient data, comprising:
calculating the difference value of any two adjacent gradient values in the third group of gradient matrixes;
and if the calculated result represents that the difference value of any two adjacent gradient values is larger than a preset threshold value, determining that the target range has an obstacle.
5. The method of claim 4, further comprising:
when determining that the target range has an obstacle, respectively calculating the difference value of two adjacent gradient values in the first group of gradient matrices and the second group of gradient matrices;
and determining the specific position of the obstacle according to the calculation results of the first group of gradient matrices and the second group of gradient matrices.
6. The method of claim 1, when determining that there is an obstacle within the target range based on the plurality of sets of gradient data, the method further comprising:
and outputting alarm information representing that the object range has barriers.
7. The method of claim 6, further comprising, prior to the outputting alert information characterizing that there is an obstacle within the target range:
starting an image acquisition function of the electronic equipment to acquire image information in the target range;
outputting alarm information representing that the target range has an obstacle, including:
and if the image information contains the obstacle, outputting alarm information representing that the obstacle exists in the target range.
8. The method of claim 7, outputting alert information characterizing an obstacle within the target range, comprising:
marking the position of the obstacle in the image information;
or outputting a prompt tone representing that the target range has an obstacle.
9. The method of claim 2, further comprising:
and if the angular velocity change value representing the X direction and the Y direction in the second data is smaller than the angular velocity change value in the Z direction, determining that the electronic equipment is in the travelling process.
10. An electronic device, comprising:
a first sensor having a sensing space; the sensing space can form a target range on the ground; obtaining, by the first sensor, first data for characterizing a set of depth data of the electronic device to the target range;
a computing unit, configured to convert the first data into a plurality of sets of gradient data, where the plurality of sets of gradient data includes: a first set of gradient matrices for characterizing a row matrix of the first data, a second set of gradient matrices for characterizing a column matrix of the first data, and a third set of gradient matrices for characterizing a diagonal matrix of the first data;
and the determining unit is used for determining whether an obstacle exists in the target range or not based on the difference value of any two adjacent gradient data in the plurality of groups of gradient data.
CN202110280192.6A 2021-03-16 2021-03-16 Information processing method and electronic equipment Active CN113109835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280192.6A CN113109835B (en) 2021-03-16 2021-03-16 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110280192.6A CN113109835B (en) 2021-03-16 2021-03-16 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113109835A CN113109835A (en) 2021-07-13
CN113109835B true CN113109835B (en) 2023-08-18

Family

ID=76711359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110280192.6A Active CN113109835B (en) 2021-03-16 2021-03-16 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113109835B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106352870A (en) * 2016-08-26 2017-01-25 深圳微服机器人科技有限公司 Method and device for positioning targets
CN108645409A (en) * 2018-05-14 2018-10-12 深圳万发创新进出口贸易有限公司 One kind being based on unpiloted driving safety system
CN110070570A (en) * 2019-03-20 2019-07-30 重庆邮电大学 A kind of obstacle detection system and method based on depth information
CN110674705A (en) * 2019-09-05 2020-01-10 北京智行者科技有限公司 Small-sized obstacle detection method and device based on multi-line laser radar
CN112034837A (en) * 2020-07-16 2020-12-04 珊口(深圳)智能科技有限公司 Method for determining working environment of mobile robot, control system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3822515B2 (en) * 2002-03-29 2006-09-20 株式会社東芝 Obstacle detection device and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106352870A (en) * 2016-08-26 2017-01-25 深圳微服机器人科技有限公司 Method and device for positioning targets
CN108645409A (en) * 2018-05-14 2018-10-12 深圳万发创新进出口贸易有限公司 One kind being based on unpiloted driving safety system
CN110070570A (en) * 2019-03-20 2019-07-30 重庆邮电大学 A kind of obstacle detection system and method based on depth information
CN110674705A (en) * 2019-09-05 2020-01-10 北京智行者科技有限公司 Small-sized obstacle detection method and device based on multi-line laser radar
CN112034837A (en) * 2020-07-16 2020-12-04 珊口(深圳)智能科技有限公司 Method for determining working environment of mobile robot, control system and storage medium

Also Published As

Publication number Publication date
CN113109835A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
US10228693B2 (en) Generating simulated sensor data for training and validation of detection models
US20190050711A1 (en) Method, storage medium and electronic device for detecting vehicle crashes
JP7105383B2 (en) Image processing method, device, storage medium and electronic equipment
US20180211120A1 (en) Training An Automatic Traffic Light Detection Model Using Simulated Images
CN109739223B (en) Robot obstacle avoidance control method and device, terminal device and storage medium
CN108062095A (en) The object tracking merged in probabilistic framework using sensor
JP2017159881A (en) Recognition result presentation device, recognition result presentation method and autonomous movable body
JP5358414B2 (en) Fragile state determination device, method and program
CN101933326B (en) Vehicle periphery monitoring device, and vehicle
CN102428427A (en) Displaying pie charts in a limited display area
KR20210140758A (en) Image processing methods, devices, electronic devices and computer program products
EP3889825A1 (en) Vehicle lane line detection method, vehicle, and computing device
CN112200129A (en) Three-dimensional target detection method and device based on deep learning and terminal equipment
CN112001886A (en) Temperature detection method, device, terminal and readable storage medium
CN104932674A (en) Impact Detection Circuit, Physical Quantity Detection Device, Electronic Apparatus, Moving Object, And Impact Detection Method
CN107499204A (en) A kind of method and apparatus that information alert is carried out in vehicle
CN110598980B (en) Risk assessment method and device for traffic scene
CN113109835B (en) Information processing method and electronic equipment
CN112257542A (en) Obstacle sensing method, storage medium, and electronic device
KR20150049934A (en) A Method for Filtering Ground Data and An Apparatus thereof
CN113468678B (en) Method and device for calculating accuracy of automatic driving algorithm
JP6185327B2 (en) Vehicle rear side warning device, vehicle rear side warning method, and other vehicle distance detection device
CN115690374B (en) Interaction method, device and equipment based on model edge ray detection
CN115546130A (en) Height measuring method and device for digital twins and electronic equipment
CN109823344A (en) Drive reminding method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant