WO2024037673A1 - Artificial intelligence system for predicting object distance - Google Patents
- Publication number
- WO2024037673A1 (PCT/CO2023/000013)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/484—Transmitters
- G01S7/486—Receivers
- G01S7/497—Means for monitoring or calibrating
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T5/00—Image enhancement or restoration
- G06T7/00—Image analysis
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Abstract
The present invention relates to a solution comprising the use of an artificial intelligence formed by a neural network that processes data from an image captured by a simple camera and, based on certain image conditioning processes, is capable of predicting the distance between the lens of the device used to capture the image and the object.
Description
ARTIFICIAL INTELLIGENCE SYSTEM FOR PREDICTING OBJECT DISTANCE
Field of the invention
The present invention belongs to the field of measurements implemented through artificial intelligence. In particular, it relates to measurements carried out in underwater environments.
State of the art
In the state of the art, patent KR20190114937, "Method for measuring distance in a TOF scheme and autonomous driving system using the same", was found. That solution provides a method for calculating distances based on the time elapsed between the emission of an infrared light beam by the camera and its reception.
Patent CN112257566, "Artificial intelligence method for distance measurement and target identification based on big data", was also found. That invention belongs to the technical field of target recognition and distance measurement and relates in particular to an artificial intelligence target recognition and distance measurement method based on big data, offering higher recognition accuracy and higher speed.
The method comprises the following steps: preprocessing a received signal; generating an anchor box to identify the target through a K-Means clustering algorithm; building a convolutional neural network branch and defining a parameter layer of the convolutional neural network; and testing the arrival of the chirp signal in the neural network evaluation model using the test set, outputting an arrival-time estimate for the chirp signal and obtaining the horizontal distance between the target and the receiver from the input image information.
The background found in the state of the art shows a common point: the distance is calculated from the time taken by a signal that is emitted and received. The solution provided here instead performs the calculation directly through an artificial intelligence that takes an image as input, preprocessed in such a way that the distance can be calculated.
Thus, the invention to be protected does not operate by calculating the travel time of a signal. This constitutes an important technical advantage, since it requires no light signal; such signals present numerous drawbacks in aquatic media, which affect their reflection and refraction and thereby introduce errors into the distance to be measured.
Additionally, another fundamental technical advantage of this solution is the possibility of using a single regular camera, so that stereovision methods or equipment can be dispensed with.
Finally, the nature of the provided method allows it to be coupled to any measurement device in underwater environments; this interchangeability is due to the fact that it does not depend on a specific image capture device.
Brief description of the invention
The solution comprises the use of an artificial intelligence composed of a neural network that processes the data of an image captured by a simple camera and, through certain image conditioning processes, is capable of predicting the distance between the lens of the capture device and the target.
Brief description of the figures
Figure 1. Shows a flowchart of the process.
Figure 2. Shows a "black box" diagram of the inputs and outputs of the implemented process.
Detailed description of the invention
The solution comprises the use of an artificial intelligence composed of a multilayer neural network with a layer of at least five input nodes, at least one hidden layer, and a layer with at least one output node.
The model is trained by introducing the variables that will later be collected by the input nodes and assigning a calculated, true value in centimeters to each combination of parameters. Training is complemented by a compilation function with an associated optimizer consisting of an extension of stochastic gradient descent (the Adam optimization algorithm). Additionally, a statistical analysis algorithm is applied to determine the mean squared error, which allows the estimated value to be distinguished from the real one.
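The passage above does not fix layer sizes, learning rate, or software framework. As a minimal sketch in plain NumPy, assuming an eight-unit hidden layer and hypothetical toy data (all numbers here are illustrative assumptions, none come from the source), a five-input feedforward network trained with the Adam optimizer against a mean squared error objective could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: five features per sample, distance target in cm.
X = rng.normal(size=(256, 5))
true_w = np.array([3.0, -2.0, 1.5, 0.5, 4.0])   # assumed ground-truth mapping
y = X @ true_w + 5.0

# Network: 5 input nodes -> 8 hidden units (ReLU) -> 1 output node (cm).
params = {
    "W1": rng.normal(scale=0.3, size=(5, 8)), "b1": np.zeros(8),
    "W2": rng.normal(scale=0.3, size=(8, 1)), "b2": np.zeros(1),
}

def forward(p, X):
    h = np.maximum(X @ p["W1"] + p["b1"], 0.0)        # hidden layer
    return h, (h @ p["W2"] + p["b2"]).ravel()         # output value in cm

def grads(p, X, y):
    h, pred = forward(p, X)
    err = (pred - y)[:, None] / len(y)                # gradient of MSE/2
    dh = err @ p["W2"].T * (h > 0)                    # backprop through ReLU
    return {"W1": X.T @ dh, "b1": dh.sum(0),
            "W2": h.T @ err, "b2": err.sum(0)}

def mse(p):
    return float(np.mean((forward(p, X)[1] - y) ** 2))

# Adam: extension of stochastic gradient descent with per-parameter moments.
m = {k: np.zeros_like(w) for k, w in params.items()}
v = {k: np.zeros_like(w) for k, w in params.items()}
beta1, beta2, lr, eps = 0.9, 0.999, 0.02, 1e-8

loss_before = mse(params)
for t in range(1, 3001):
    g = grads(params, X, y)
    for k in params:
        m[k] = beta1 * m[k] + (1 - beta1) * g[k]
        v[k] = beta2 * v[k] + (1 - beta2) * g[k] ** 2
        m_hat = m[k] / (1 - beta1 ** t)
        v_hat = v[k] / (1 - beta2 ** t)
        params[k] -= lr * m_hat / (np.sqrt(v_hat) + eps)
loss_after = mse(params)
```

The mean squared error computed before and after training plays the role of the statistical check described above: it quantifies how far the estimated values lie from the true ones.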
The process begins with the collection of data obtained by means of any image capture device capable of operating in underwater environments. The collected data are divided into five values, four of which are preprocessed to obtain the corresponding functions supplied to the input nodes, characterized as follows.
A first node receives the pixel stress level obtained through a Laplacian method: a separable linear filter executes a mathematical operation on each row and each column, and each value is then multiplied by a delta value. This is done for each axis (x, y), yielding two matrices. The two resulting matrices are summed into a single matrix, whose average yields a single numerical value.
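The filter kernel and the delta value are not specified in the source. As a hedged sketch, assuming a one-dimensional second-derivative kernel, the row/column filtering, per-axis scaling, matrix summation, and final averaging described above could be written as:

```python
import numpy as np

def pixel_stress(img, delta=1.0):
    # One-dimensional second-derivative kernel (an assumption; the source
    # only states that a separable linear filter is applied).
    k = np.array([1.0, -2.0, 1.0])
    img = img.astype(float)
    # Filter every row (x axis) and every column (y axis), scale by delta.
    dx = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img) * delta
    dy = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img) * delta
    # Sum the two matrices and average into a single numerical value.
    return float(np.mean(dx + dy))
```

Sharper image content produces larger-magnitude second-derivative responses, which is what makes this value informative as a network input.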
The second node receives the degree of standard deviation of the pixel stress values of the matrix: the Laplace transform is applied over the selected area of interest (this transform amounts to matrix multiplication under the rules of the first and second derivatives). The procedure yields a matrix whose standard deviation is computed, and this value is finally squared in order to obtain the variance.
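As an illustration of this second node (the source names the operator but not a kernel, so the standard discrete Laplacian on interior pixels is assumed here), the variance of the stress values over the area of interest might be computed as:

```python
import numpy as np

def stress_variance(roi):
    roi = roi.astype(float)
    # Discrete Laplacian on interior pixels via shifted sums (assumed kernel).
    lap = (roi[:-2, 1:-1] + roi[2:, 1:-1] +
           roi[1:-1, :-2] + roi[1:-1, 2:] - 4.0 * roi[1:-1, 1:-1])
    sigma = np.std(lap)          # standard deviation of the stress values
    return float(sigma ** 2)     # squared, in order to obtain the variance
```

Blurred regions of interest produce near-constant Laplacian responses and hence low variance; sharply textured regions produce high variance.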
The third node receives the second derivative on each of the axes, obtained by applying the Tenengrad method (TENG algorithm), which is based on applying the first and second derivatives over the defined area of interest. This is done for both axes (x, y), and the average is finally obtained as a numerical value.
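The classical Tenengrad focus measure averages squared Sobel derivative responses; the source names the method but not the kernels, so the following sketch assumes the standard 3x3 Sobel pair:

```python
import numpy as np

def tenengrad(roi):
    roi = roi.astype(float)
    # Standard 3x3 Sobel kernel for the x derivative (assumed; its transpose
    # gives the y derivative).
    sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

    def conv2(img, k):
        # Naive valid-region 2-D correlation, enough for a small sketch.
        out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
        return out

    gx = conv2(roi, sobel)       # derivative response along x
    gy = conv2(roi, sobel.T)     # derivative response along y
    # Average of the squared responses over both axes, as a single value.
    return float(np.mean(gx ** 2 + gy ** 2))
```

For a horizontal intensity ramp the y response vanishes and the x response is constant, so the measure reduces to that constant squared.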
The fourth node receives the normalized variance level: the standard deviation is computed over the defined area of interest, the variance is obtained from it, and the result is divided by the average of that area.
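This normalized variance (variance divided by the mean intensity of the area) can be sketched directly; the guard against a zero mean is an added assumption, since the source does not address that case:

```python
import numpy as np

def normalized_variance(roi):
    roi = roi.astype(float)
    sigma = np.std(roi)                  # standard deviation over the ROI
    mean = np.mean(roi)                  # average of the same area
    # Variance divided by the mean; return 0.0 for an all-zero ROI (assumed).
    return float(sigma ** 2 / mean) if mean != 0 else 0.0
```

Dividing by the mean makes the measure insensitive to overall brightness, so regions with the same texture but different illumination yield similar values.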
The fifth and last node receives the height, in number of pixels, of the object as represented in the matrix, without any preprocessing of the received data.
The collected data are processed in the hidden layer, which transmits the results to the output node.
The results are communicated by the output layer as a size value in cm.
Claims
1. Artificial intelligence process applied to an object-size forecasting neural network, characterized in that it comprises a multilayer feedforward neural network with five input nodes, a hidden layer and an output node, within an integrated development environment.
2. Artificial intelligence process according to the previous claim, wherein the input nodes correspond to: the pixel stress level; the degree of standard deviation of the pixel stress value; the second derivative on each of the axes; the normalized variance level; and the height, in number of pixels, of the evaluated space.
3. Artificial intelligence process according to the previous claim, wherein the variables of the input nodes are obtained from an image captured with a camera.
4. Artificial intelligence process according to claims 1 and 2, wherein the pixel stress level is determined through a Laplacian method.
5. Artificial intelligence process according to claim 1, characterized in that the hidden layer comprises a node that processes the information obtained from the input nodes through a compilation function and a stochastic gradient descent optimization algorithm.
6. Artificial intelligence process according to claim 1, wherein the output node delivers a response comprising a value expressed in centimeters.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CONC2022/0011603 | 2022-08-17 | ||
CONC2022/0011603A CO2022011603A1 (en) | 2022-08-17 | 2022-08-17 | Artificial intelligence process to predict the size of objects |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024037673A1 true WO2024037673A1 (en) | 2024-02-22 |
Family
ID=89940788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CO2023/000013 WO2024037673A1 (en) | 2022-08-17 | 2023-08-16 | Artificial intelligence system for predicting object distance |
Country Status (2)
Country | Link |
---|---|
CO (1) | CO2022011603A1 (en) |
WO (1) | WO2024037673A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10733755B2 (en) * | 2017-07-18 | 2020-08-04 | Qualcomm Incorporated | Learning geometric differentials for matching 3D models to objects in a 2D image |
US20200294310A1 (en) * | 2019-03-16 | 2020-09-17 | Nvidia Corporation | Object Detection Using Skewed Polygons Suitable For Parking Space Detection |
US20210248812A1 (en) * | 2021-03-05 | 2021-08-12 | University Of Electronic Science And Technology Of China | Method for reconstructing a 3d object based on dynamic graph network |
US11126915B2 (en) * | 2018-10-15 | 2021-09-21 | Sony Corporation | Information processing apparatus and information processing method for volume data visualization |
US20210350560A1 (en) * | 2019-01-24 | 2021-11-11 | Imperial College Innovations Limited | Depth estimation |
US20220084234A1 (en) * | 2020-09-17 | 2022-03-17 | GIST(Gwangju Institute of Science and Technology) | Method and electronic device for identifying size of measurement target object |
- 2022-08-17: CO application CONC2022/0011603A, published as CO2022011603A1
- 2023-08-16: WO application PCT/CO2023/000013, published as WO2024037673A1
Also Published As
Publication number | Publication date |
---|---|
CO2022011603A1 (en) | 2024-02-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23854568 Country of ref document: EP Kind code of ref document: A1 |