CN108764465B - Processing device for neural network operation - Google Patents

Processing device for neural network operation

Info

Publication number
CN108764465B
CN108764465B
Authority
CN
China
Prior art keywords
neural network, data, input, navigation, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810486527.8A
Other languages
Chinese (zh)
Other versions
CN108764465A (en)
Inventor
吴凡迪 (Wu Fandi)
陈云霁 (Chen Yunji)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201810486527.8A priority Critical patent/CN108764465B/en
Publication of CN108764465A publication Critical patent/CN108764465A/en
Application granted granted Critical
Publication of CN108764465B publication Critical patent/CN108764465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

An intelligent piloting device, comprising: a target recognition device for processing the images captured during navigation, comparing them with a sample library, and recognizing the types of obstacles in the images; and an information processing device for receiving the processed images and the recognized obstacle types and outputting navigation selection data through neural network operations. The device can be applied to intelligent navigation, saving labor costs while reducing risk during vehicle navigation.

Description

Processing device for neural network operation
Technical Field
The present disclosure relates to the technical field of information processing, and in particular to intelligent navigation equipment.
Background
At present, in the field of driving and navigation, particularly marine navigation, ships in dense traffic and complex water-surface conditions generally rely on laser radar (lidar) and conventional radar to avoid sailing risks. For small floating objects, a pilot must keep constant watch during the voyage to avoid obstacles on and below the water surface.
One prior technique is sonar-based underwater detection, as in sonar fish finders, but it is effective only for underwater targets and its accuracy is greatly affected in complex waters. Another is radar detection, whether conventional radar or lidar, but radar has a short-range detection blind zone and cannot detect floating objects on the surrounding water surface in time. A third is vision-based detection, which mainly extracts features of water-surface targets, but traditional vision techniques can only distinguish objects with obvious identifying features, such as islands and ships, and cannot function in complex water-surface environments. The last is manual identification, but its labor cost is too high and it places heavy demands on the pilot.
Disclosure of Invention
Technical problem to be solved
In view of the above, an object of the present disclosure is to provide an intelligent navigation apparatus that solves at least some of these technical problems.
(II) technical scheme
In order to achieve the above object, the present disclosure provides an intelligent piloting device, including: a target recognition device for processing the images captured during navigation, comparing them with a sample library, and recognizing the types of obstacles in the images; and an information processing device for receiving the processed images and the recognized obstacle types and outputting navigation selection data through neural network operations.
In a further aspect, the information processing apparatus includes a neural network chip for executing the neural network operations.
In a further aspect, the neural network chip includes: a storage unit for storing input data, neural network parameters, and calculation instructions, where the input data comprise the processed images and the recognized obstacle types; a control unit for extracting a calculation instruction from the storage unit and parsing it into a plurality of operation instructions; and an operation unit for performing calculations on the input data according to the plurality of operation instructions to obtain the navigation selection data.
In a further aspect, the storage unit includes: an input/output module for acquiring the input data, neural network parameters, and calculation instructions; a scalar data storage module for storing scalar data; and a storage medium, which is on-chip storage, for storing data blocks.
In a further aspect, the neural network chip further includes: a direct memory access (DMA) unit for staging the input data, neural network parameters, and calculation instructions held in the storage unit so that they can be called by the control unit and/or the operation unit.
In a further aspect, the neural network chip further includes: an instruction cache, which is an on-chip cache, for caching instructions from the DMA for the control unit to call.
In a further aspect, the neural network parameters include input neurons, output neurons, and weights, and the chip further includes: an input neuron cache for caching input neurons from the DMA for the operation unit to call; a weight cache for caching weights from the DMA for the operation unit to call; and an output neuron cache for storing the output neurons produced by the operation unit before they are output to the DMA. The input neuron cache, weight cache, and output neuron cache are all on-chip caches.
In a further aspect, the information processing apparatus further includes: an input data encoder for converting the processed image and the identified obstacle type into digital information that can be processed by the neural network.
In a further aspect, the information processing apparatus further includes: a memory for storing the navigation conditions in the current time period and the navigation selection data of different time periods; and an operator for calculating the sailing risk and/or the current sailing loss from the navigation selection queue data.
In a further aspect, the information processing apparatus further includes a data input/output terminal for receiving the signal from the input data encoder and transmitting it into the neural network chip as input; for receiving the navigation selection data output by the neural network chip and storing it in the memory; and for reading the navigation selection queue and the current-period navigation conditions from the memory and feeding them into the operator.
In a further aspect, the information processing apparatus further includes: and the output data converter is used for recoding the sailing risk and/or the current sailing loss data calculated by the arithmetic unit and transmitting the recoded data to external equipment.
In a further aspect, the output data converter transmits the result of the operator calculation back to the target identification device, and updates the sample library.
In a further scheme, the neural network chip is further used for collecting an input-output information set after navigation selection is made by external equipment or a crew, adaptively updating the neural network parameters, and training to generate new neural network parameters.
In a further aspect, the object recognition apparatus includes: the image processing unit is used for preprocessing an input image, binarizing the input image and selecting a subarea to extract features; and the comparison unit is used for reading the extracted features and the samples in the sample library, and preliminarily identifying the types of the obstacle targets in the images.
(III) advantageous effects
(1) The equipment disclosed by the invention can be applied to intelligent navigation, so that the labor cost can be saved, and meanwhile, the risk in the navigation process of vehicles (such as ships) is reduced;
(2) The intelligent navigation equipment disclosed herein implements navigation with a dedicated neural network chip, can better adapt to different water conditions, and at the same time solves the delay problem of running a neural network in real time (on general-purpose hardware a neural network runs slowly, so an AI navigation algorithm cannot return feedback on the navigation state to the ship in time, losing real-time behavior).
Drawings
Fig. 1 is a block diagram of an information processing apparatus according to an embodiment of the present disclosure.
Fig. 2 is a block schematic diagram of the neural network chip of fig. 1.
Fig. 3 is a block schematic diagram of an intelligent piloting device of an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of an application scenario of the intelligent navigation system.
Fig. 5 is a schematic diagram of another application scenario of the intelligent navigation system.
Fig. 6 is an operation flowchart of an intelligent navigation method according to an embodiment of the present disclosure.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Hereinafter, examples will be provided to explain embodiments of the present disclosure in detail. The advantages and effects of the present disclosure will be more apparent from the disclosure of the present disclosure. The drawings attached hereto are simplified and serve as illustrations. The number, shape, and size of the components shown in the drawings may be modified depending on the actual situation, and the arrangement of the components may be more complicated. Other aspects of practice or use can be made in the present disclosure, and various changes and modifications can be made without departing from the spirit and scope defined in the present disclosure.
Fig. 1 is a block diagram of an information processing apparatus according to an embodiment of the present disclosure. According to an aspect of the embodiments of the present disclosure, an information processing apparatus 110 is provided for receiving unprocessed or processed images and/or identified obstacle types and outputting navigation selection data through neural network operations. The image may be a video frame or still image captured of the vehicle's surroundings (on land or at sea) while underway; it may be preprocessed image data containing extracted features or unprocessed image data. Neural network operations analyze the image and the obstacle types in it to determine the type of obstacle and, from that, the navigation selection, such as whether to avoid the obstacle and how to avoid it. Through the information processing device, the image can be efficiently analyzed for obstacles, the exact obstacle type determined, and a navigation selection given, saving labor costs while reducing the vehicle's risk during navigation.
The information processing device 110 may be trained on the input data, on historical image information over a period of time, and on the navigation choices made under those historical conditions: the parameters of an LSTM model are initialized, a loss function is obtained from the operation results on the relevant inputs, the navigation selection with the minimum cost under the current conditions is calculated, and the optimal navigation selection as judged by the intelligent navigation is returned and presented externally. Meanwhile, after the crew makes a navigation selection, an input-output information set is collected, the parameters of the neural network (e.g., the weights and biases of an LSTM long short-term memory network) are adaptively updated, and new neural network parameters are trained.
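The LSTM update referred to above can be illustrated with a minimal, scalar LSTM cell in plain Python. This is the standard textbook LSTM step, not the chip's actual implementation; the gate weights `W` and the toy input sequence are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell.

    W maps each gate name to (w_x, w_h, b). All quantities are scalars
    to keep the arithmetic visible; a real layer uses vectors and matrices.
    """
    gate = lambda name, act: act(W[name][0] * x + W[name][1] * h_prev + W[name][2])
    f = gate("forget", sigmoid)       # how much of the old cell state to keep
    i = gate("input", sigmoid)        # how much new information to write
    g = gate("candidate", math.tanh)  # the new candidate information
    o = gate("output", sigmoid)       # how much of the cell state to expose
    c = f * c_prev + i * g            # updated cell state
    h = o * math.tanh(c)              # new hidden state / output
    return h, c

# Illustrative weights and a toy input sequence (not values from the patent).
W = {name: (0.5, 0.5, 0.0) for name in ("forget", "input", "candidate", "output")}
h, c = 0.0, 0.0
for x in (1.0, -0.5, 0.25):
    h, c = lstm_step(x, h, c, W)
```

Training would adjust the entries of `W` (the weights and biases the paragraph mentions) to minimize the navigation loss function over recorded input-output pairs.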
In some embodiments, the information processing apparatus 110 includes: a neural network chip 111 for performing neural network operations. By adopting the special neural network chip, the speed of the neural network operation can be accelerated so as to provide a timely navigation strategy.
In some embodiments, the navigation selection data provided by the information processing device 110 include, but are not limited to: whether there is an obstacle, the type of obstacle, the weather conditions, whether avoidance is needed, the direction of travel, the speed of travel, and/or the degree of urgency.
Fig. 2 is a block diagram of the neural network chip of fig. 1 according to one embodiment. As shown in fig. 2, in some embodiments, the neural network chip 200 includes a storage unit 201, a control unit 202, and an operation unit 203. The storage unit 201 is used to store input data (which may serve as input neurons), neural network parameters, and instructions; the control unit 202 is used to read the special instruction from the storage unit and decode it into operation-unit instructions that are fed to the operation unit 203; and the operation unit 203 is used to perform the corresponding neural network operation on the data according to those instructions to obtain output neurons. The storage unit 201 may further store the output neurons produced by the operation unit. Neural network parameters here include, but are not limited to, weights, biases, and activation functions.
The storage unit 201 stores data, where the input data may be unprocessed or processed images and recognized obstacle types transmitted from outside; the neural network parameters may be pre-stored or loaded later, and the network includes but is not limited to a CNN (convolutional neural network), RNN (recurrent neural network), or DNN (deep neural network), preferably the LSTM long short-term memory model, a kind of RNN; the calculation instructions may be pre-stored in the storage unit 201 or stored from outside at a later time, and are described further below.
In some embodiments, the storage unit 201 includes a scalar data storage module for storing scalar data, such as temperature and humidity values, found in the input data, neural network parameters, and instructions; the scalar data storage module serves to increase the speed of data handling. The storage unit 201 may further include an input/output module for obtaining the input data, neural network parameters, and calculation instructions (a calculation instruction may be decoded by the control unit into operation instructions that direct the operation unit to perform the corresponding neural network operation).
In some embodiments, the storage unit 201 may also include a storage medium. The storage medium may be off-chip memory or, in practical applications, on-chip memory for storing data blocks. A data block may specifically be n-dimensional data, where n is an integer greater than or equal to 1: when n equals 1 the block is 1-dimensional data, i.e., a vector; when n equals 2 it is 2-dimensional data, i.e., a matrix; and when n is 3 or more it is a multidimensional tensor.
In some embodiments, performing the corresponding neural network operation in the operation unit 203 includes: multiplying the input neurons by the weight data to obtain products; executing the addition tree operation, which adds the products level by level through an addition tree to obtain a weighted sum, to which a bias is added or which is left unchanged; and applying the activation function to the (possibly biased) weighted sum to obtain the output neurons. Preferably, the activation function may be a sigmoid, tanh, ReLU, or softmax function.
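The three-step layer operation above (multiply, addition-tree accumulation, activation) can be sketched in plain Python. The layer sizes, weight values, and the choice of sigmoid activation are illustrative assumptions, not values from the disclosure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def addition_tree(values):
    """Sum values pairwise, level by level, as an addition tree would."""
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
    return values[0] if values else 0.0

def layer_forward(inputs, weights, biases, activation=sigmoid):
    """One layer: multiply, addition-tree sum, add bias, activate."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        # Multiply each input neuron by its weight.
        products = [x * w for x, w in zip(inputs, neuron_weights)]
        # Accumulate the products with an addition tree, then add the bias.
        weighted_sum = addition_tree(products) + bias
        # Apply the activation function to get the output neuron.
        outputs.append(activation(weighted_sum))
    return outputs

# Toy layer: 3 input neurons, 2 output neurons (illustrative values).
out = layer_forward([1.0, 2.0, 3.0],
                    [[0.5, -0.25, 0.1], [0.2, 0.2, 0.2]],
                    [0.0, -1.0])
```

In hardware the addition tree performs the pairwise sums in parallel, which is why the operation is described as a tree rather than a running sum.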
In some embodiments, as shown in fig. 2, the neural network chip 200 may further include a DMA 204 (Direct Memory Access) unit for staging the input data, neural network parameters, and instructions held in the storage unit so that they can be called by the control unit 202 and the operation unit 203; further, after the operation unit 203 computes the output neurons, the DMA writes them back into the storage unit 201.
In some embodiments, as shown in fig. 2, the neural network chip 200 further includes an instruction cache 205 for caching instructions from the DMA 204 for the control unit to call. The instruction cache 205 may be an on-chip cache integrated on the neural network chip during manufacturing, which improves processing speed when instructions are fetched and saves overall operation time.
In some embodiments, the neural network chip 200 further includes: an input neuron cache 206 for caching input neurons from the DMA 204 for the operation unit 203 to call; a weight cache 207 for caching weights from the DMA 204 for the operation unit 203 to call; and an output neuron cache 208 for storing the output neurons produced by the operation unit 203 before they are output to the DMA 204. The input neuron cache 206, weight cache 207, and output neuron cache 208 may also be on-chip caches integrated on the neural network chip 200 by a semiconductor process, which speeds up reads and writes by the operation unit 203 and saves overall operation time.
In some embodiments, the information processing apparatus 110 further includes an input data encoder 112 in addition to the neural network chip 111, for converting the processed image and the recognized obstacle type into digital information that can be processed by the neural network. The digital information which can be processed can be used as input neurons in the neural network to participate in the operation of the neural network.
In some examples, the information processing apparatus 110 may further include: a memory 114 and an operator 115, wherein the memory 114 is used for storing navigation conditions in the current time period and navigation selection data of different time periods; the operator 115 is used to calculate the risk of voyage (estimated from various information such as sea surface information and weather information) and/or the loss of current voyage (including consumed fuel, time spent, etc.) based on the voyage selection queue data.
In some embodiments, the information processing apparatus 110 further comprises a data input/output terminal 113, where the input/output terminal 113 is used for receiving the signal from the input data encoder 112 and transmitting it into the neural network chip 111 as input; for receiving the navigation selection data output by the neural network chip 111 and storing it in the memory 114; and for reading the navigation selection queue and the current-period navigation conditions from the memory 114 and feeding them into the operator 115.
In some embodiments, the information processing apparatus 110 further includes an output data converter 116, and the output data converter 116 is configured to re-encode the risk of voyage and/or the loss data of the current voyage calculated by the arithmetic unit 115 and transmit the re-encoded data to an external device.
In some embodiments, the output data converter 116 transmits the operator's result back to the target recognition device to update the sample library, and/or transmits the obtained result back to the outside to update an external sample library.
In some embodiments, the neural network chip 111 is further configured to collect input-output information sets after the navigation selection is made by an external device or a crew, adaptively update the neural network parameters, and train to generate new neural network parameters. Preferably, the training process is processed in real time.
Fig. 3 is a block schematic diagram of an intelligent piloting device of an embodiment of the present disclosure. According to another aspect of the embodiments of the present disclosure, an intelligent navigation device 100 is provided, which includes, in addition to the information processing device 110 of the above embodiments, a target recognition device 120. The target recognition device 120 takes as input a video frame or picture captured by a camera device (including but not limited to a camera, a mobile phone, or a digital camera), applies a series of preprocessing steps (image defogging, graying, image stabilization, smoothing, segmentation, etc.), binarizes the image, and selects sub-regions from which to extract features (including texture features, invariant moment features, geometric features, etc.). After these features are extracted, they are compared with a navigation water-surface sample library to obtain a preliminary obstacle type, and the features and the preliminarily determined obstacle type are passed on as input to the information processing device 110. The intelligent navigation device 100 of the embodiment of the present disclosure uses a dedicated neural network chip to implement navigation, can better adapt to different water conditions, and at the same time solves the delay problem of running a neural network in real time.
Intelligent navigation is realized by these two devices, the target recognition device 120 and the information processing device 110. The target recognition device processes the images, compares them with those in the sample library, and recognizes the obstacle types; the information processing device receives the data transmitted by the target recognition device and returns to the outside a navigation scheme for the ship in its current state, thereby realizing intelligent navigation.
In the interaction between the two, the storage unit 201 of the information processing device 110 can interact with the target recognition device 120 in real time, receiving the feature and obstacle-type information it transmits. The device is trained on the input data, on historical image information over a period of time, and on the navigation choices made under those historical conditions: the parameters of a neural network (such as LSTM) model are initialized, a loss function is obtained from the operation results on the relevant inputs, the navigation selection with the minimum cost under the current conditions is calculated, and the optimal navigation selection as judged by the intelligent navigation is returned and presented externally. Meanwhile, after the crew makes a navigation selection, an input-output information set is collected, the parameters of the neural network (e.g., the weights and biases of an LSTM long short-term memory network) are adaptively updated, and new neural network parameters are trained.
For the internal configuration of the information processing apparatus 110, the configuration of the above embodiment is referred to, and is not described herein again.
In some embodiments, for the target recognition device 120, in the sub-region feature extraction process, the texture features are extracted using a gray-level co-occurrence matrix, with four texture description factors; the invariant moment features may include 6 Hu invariant moments and 3 radial invariant moments (9 features in total); and the geometric features may include area, slenderness, compactness, and convex hull features.
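The gray-level co-occurrence matrix step can be sketched as follows. The disclosure does not name the four texture description factors, so the four shown here (contrast, energy, entropy, homogeneity) are a common choice and an assumption; the offset and number of gray levels are likewise illustrative.

```python
import math
from collections import Counter

def glcm(image, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a probability distribution over (i, j) gray-level pairs."""
    counts = Counter()
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[(image[y][x], image[ny][nx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def texture_descriptors(p):
    """Four common GLCM descriptors: contrast, energy, entropy, homogeneity."""
    contrast = sum(prob * (i - j) ** 2 for (i, j), prob in p.items())
    energy = sum(prob ** 2 for prob in p.values())
    entropy = -sum(prob * math.log2(prob) for prob in p.values() if prob > 0)
    homogeneity = sum(prob / (1 + abs(i - j)) for (i, j), prob in p.items())
    return contrast, energy, entropy, homogeneity
```

A real pipeline would quantize the grayscale image to a small number of levels and average the descriptors over several offsets (0, 45, 90, 135 degrees) for rotation robustness.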
In some embodiments, the object recognition device 120 includes: an image processing unit for preprocessing the input image, binarizing it, and selecting sub-regions from which to extract features; and a comparison unit for reading the extracted features and the samples in the sample library and preliminarily identifying the types of the obstacle targets in the images.
In some embodiments, in the image preprocessing process, image defogging adopts a single-scale Retinex algorithm based on edge detection, estimating the luminance component with Gaussian filtering guided by edge information to achieve a better defogging effect. Electronic image stabilization extracts feature points with the scale-invariant feature transform (SIFT) algorithm, combines an affine motion model with Kalman filtering to obtain compensation parameters, and compensates each frame using an adjacent-frame compensation method.
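The single-scale Retinex estimate can be sketched as below. This sketch omits the edge-detection refinement the disclosure mentions and uses a plain 3x3 Gaussian; the kernel size, sigma, and log offset are illustrative assumptions.

```python
import math

def gaussian_blur(img, sigma=1.0):
    """Naive 3x3 Gaussian convolution with edge clamping."""
    k = [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
          for dx in (-1, 0, 1)] for dy in (-1, 0, 1)]
    s = sum(sum(row) for row in k)
    k = [[v / s for v in row] for row in k]          # normalize the kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += k[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc
    return out

def single_scale_retinex(img, sigma=1.0):
    """R = log(I) - log(G * I): divide out the estimated illumination,
    keeping the reflectance detail (the +1.0 avoids log(0))."""
    blur = gaussian_blur(img, sigma)
    return [[math.log(img[y][x] + 1.0) - math.log(blur[y][x] + 1.0)
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a uniformly lit (e.g., fog-washed) region the blurred estimate equals the image, so the Retinex output is near zero; detail shows up where the pixel departs from its local illumination estimate.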
In some embodiments, the sources of the sample library are live image data, web picture data, and/or various 3D models made by 3DMAX software.
In some embodiments, the obstacles in the sample library include at least one of: ships, large marine life, floats, reefs, glaciers, islands, farms, fishing nets, and the like.
In some embodiments, the cost of a navigation selection in the loss function may be defined by the damage the ship may suffer (quantified by the price of ship repairs), the impact on the environment and ecology, the delay in voyage time, and the amount of fuel the ship consumes; after these criteria are quantified, they are weighted to obtain the loss function of the navigation selection.
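The weighted loss described above can be sketched as follows. The weight values, criterion units, and candidate actions are illustrative assumptions, not values from the disclosure.

```python
def voyage_loss(damage_cost, eco_impact, delay_hours, fuel_litres,
                weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted sum of the four quantified criteria (weights are illustrative)."""
    criteria = (damage_cost, eco_impact, delay_hours, fuel_litres)
    return sum(w * c for w, c in zip(weights, criteria))

def best_selection(candidates):
    """Return the navigation selection with the minimum loss."""
    return min(candidates, key=lambda c: voyage_loss(*c["criteria"]))

# Hypothetical candidate actions with (damage, eco, delay, fuel) estimates.
candidates = [
    {"action": "hold course", "criteria": (900.0, 50.0, 0.0, 120.0)},
    {"action": "detour left", "criteria": (0.0, 10.0, 2.5, 150.0)},
    {"action": "full stop",   "criteria": (0.0, 0.0, 12.0, 60.0)},
]
choice = best_selection(candidates)
```

In practice each criterion would first be normalized to a common scale before weighting, so that a large fuel figure cannot silently dominate a small damage figure.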
The embodiment of the present disclosure further provides an intelligent piloting method, including: processing the images shot during navigation through a target recognition device, comparing the images with a sample library, and recognizing the types of obstacles in the images; and receiving the processed image and the recognized barrier type through the information processing device, and outputting navigation selection data through neural network operation.
Fig. 6 is an operation flowchart of an intelligent navigation method according to an embodiment of the present disclosure. The overall flow of the intelligent piloting device in the embodiment of the disclosure can be as follows:
step S1, the target recognition device 120 processes the image, compares the processed image with the images in the sample library, recognizes the type of the obstacle, and forms input data;
step S2, the input data is encoded by the input data encoder 112 and then stored in the storage unit 201 through the input/output terminal 113 or directly transmitted to the storage unit 201;
step S3, the DMA 204 (Direct Memory Access) buffers the instructions, input neurons (including data conforming to the neural network's input format), and weights stored in the storage unit 201, and transmits them in batches to the instruction cache, input neuron cache, and weight cache, respectively;
step S4, the control unit reads the instruction from the instruction cache, decodes it into the instruction of the arithmetic unit, and then transmits it to the arithmetic unit;
in step S5, according to the operation-unit instruction, the operation unit executes the corresponding operation. In each layer of the neural network, the operation can be divided into three steps: step S5.1, multiplying the corresponding input neurons by the weights; step S5.2, performing the addition tree operation, i.e., adding the results of step S5.1 level by level through an addition tree to obtain a weighted sum, and adding a bias to the weighted sum or leaving it unchanged, as needed; and step S5.3, applying the activation function to the result of step S5.2 to obtain the output neurons, which are transmitted into the output neuron cache.
In step S6, the output data converter 116 returns the obtained result to the target recognition device 120 to update the navigation water-surface sample library, and the resulting digital signal is re-encoded and presented to the crew through the display device.
The specific functional modules/devices/units, etc. referred to in the steps may be arranged with reference to the corresponding functional elements in the above-described embodiments.
Fig. 4 is a schematic diagram of an application scenario of the intelligent navigation system. As shown in fig. 4, an embodiment of the present disclosure provides an intelligent navigation system that can be applied to intelligent navigation at sea; a ship may be equipped with the system, which includes an image capturing device 410 (which may be a camera or a video camera) for capturing the image ahead of the ship when it is underway; a target recognition device for processing the images captured during navigation, comparing them with the sample library, and recognizing the types of obstacles in the images; an information processing device for receiving the processed images and the recognized obstacle types and outputting navigation selection data through neural network operations; and a display device for showing the navigation selection data output by the information processing device to a user. For the arrangement of these devices, reference may be made to the corresponding devices in the intelligent navigation device of the above embodiments, which are not described again here. It should be noted that the target recognition device, the information processing device, and the display device may be separate bodies or may be integrated, for example into the computer 420 shown in fig. 4. When the image capturing device 410 captures the obstacle 430, the result can be calculated and displayed to a user (such as a crew member) by the computer 420 shown in fig. 4 to assist the navigation operation.
Similarly, fig. 5 is a schematic diagram of another application scenario of the intelligent navigation system. Slightly differently, the intelligent navigation system in fig. 5 is applied to land navigation: the image capturing device 510 in fig. 5 may be installed on the front windshield of an automobile (for example, a camera mounted in the windshield), and the information processing device, the display device, and the target recognition device may be separate bodies or integrated into one, for example the host 520 shown in fig. 5 (which may be integrated into the automobile's operating system). When the image capturing device 510 captures an obstacle 530, the driver can be given computed prompts by the host 520 shown in fig. 5 to assist driving.
In the embodiments provided in the present disclosure, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of the described parts or modules is merely a logical division, and other divisions may be used in practice. For example, multiple parts or modules may be combined or integrated into a system, or some features may be omitted or not executed.
In this disclosure, the term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B).
In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. The specific embodiments described are not intended to limit the disclosure but rather to illustrate it. The scope of the present disclosure is not to be determined by the specific examples provided above but only by the claims below. In other instances, well-known circuits, structures, devices, and operations are shown in block diagram form, rather than in detail, in order not to obscure an understanding of the description. Where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, optionally having similar characteristics or identical features, unless otherwise specified or evident.
Various operations and methods have been described. Some methods have been described in a relatively basic manner in a flow chart form, but operations may alternatively be added to and/or removed from the methods. Additionally, while the flow diagrams illustrate a particular order of operation according to example embodiments, it is understood that this particular order is exemplary. Alternative embodiments may optionally perform these operations in a different manner, combine certain operations, interleave certain operations, etc. The components, features, and specific optional details of the devices described herein may also optionally be applied to the methods described herein, which may be performed by and/or within such devices in various embodiments.
Each functional unit/subunit/module/submodule in the present disclosure may be hardware, for example, the hardware may be a circuit, including a digital circuit, an analog circuit, and the like. Physical implementations of hardware structures include, but are not limited to, physical devices including, but not limited to, transistors, memristors, and the like. The computing modules in the computing device may be any suitable hardware neural network chips, such as CPUs, GPUs, FPGAs, DSPs, ASICs, and the like. The memory unit may be any suitable magnetic or magneto-optical storage medium, such as RRAM, DRAM, SRAM, EDRAM, HBM, HMC, etc.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (6)

1. An intelligent piloting device, characterized in that it comprises:
the target recognition device is used for processing images captured during navigation, comparing them with a sample library and recognizing the types of obstacles in the images;
the information processing device is used for receiving the processed image and the recognized obstacle type, and outputting navigation selection data through neural network operation;
the information processing apparatus includes: the neural network chip is used for executing the neural network operation;
the neural network chip is also used for collecting an input-output information set after a navigation selection is made by an external device or a crew member, adaptively updating the neural network parameters, and training to generate new neural network parameters;
the information processing apparatus includes: an input data encoder for converting the processed image and the identified obstacle type into digital information that can be processed by a neural network;
the memory is used for storing the navigation condition in the current time period and navigation selection data in different time periods;
the arithmetic unit is used for calculating the navigation risk and/or the loss of the current voyage according to the navigation selection queue data;
the neural network chip includes:
the storage unit is used for storing input data, neural network parameters and calculation instructions, wherein the input data comprises processed images and recognized obstacle types;
the control unit is used for extracting the calculation instruction from the storage unit and analyzing the calculation instruction to obtain a plurality of operation instructions;
the operation unit is used for performing calculations on the input data according to the plurality of operation instructions to obtain navigation selection data;
the information processing apparatus further includes:
a data input/output terminal, which is used for receiving the signal output by the input data encoder and transmitting it to the neural network chip as input; is also used for receiving the navigation selection data output by the output end of the neural network chip and storing it into the memory; and is used for reading the navigation selection queue and the navigation condition in the current time period from the memory and inputting them to the arithmetic unit;
the output data converter is used for re-encoding the navigation risk and/or the current voyage loss data calculated by the arithmetic unit and then transmitting the re-encoded data to an external device;
the output data converter also transmits the result calculated by the arithmetic unit back to the target recognition device to update the sample library.
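The data flow recited in claim 1 (input data encoder → neural network chip → memory queue → arithmetic unit → output data converter) can be sketched as follows. This is a hedged illustration only: the encoding scheme, the thresholded stand-in for the chip, and the risk formula (fraction of recent evasive selections) are assumptions for the sketch, not the claimed implementation.

```python
# Illustrative data flow for claim 1. All names and formulas are
# assumptions made for this sketch.
def encode(image, obstacle_type, type_table):
    """Input data encoder: convert image and obstacle type to numbers."""
    return [float(x) for x in image] + [float(type_table[obstacle_type])]

def chip_forward(encoded):
    """Stand-in for the neural network chip: emit selection data."""
    return 1 if sum(encoded) > 2.0 else 0   # 1 = evade, 0 = hold course

def voyage_risk(selection_queue):
    """Arithmetic unit: score risk over the navigation selection queue."""
    return sum(selection_queue) / len(selection_queue) if selection_queue else 0.0

memory = []  # navigation selection queue for the current time period
type_table = {"none": 0, "buoy": 1, "vessel": 2}
for image, kind in [([0.5, 0.2], "none"), ([1.5, 1.0], "vessel")]:
    # data input/output terminal: feed encoder output to the chip,
    # then store the chip's selection data into the memory queue
    memory.append(chip_forward(encode(image, kind, type_table)))
print(voyage_risk(memory))  # → 0.5
```

The output data converter would then re-encode `voyage_risk(memory)` for an external device and feed the result back to the target recognition device to update the sample library.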
2. The intelligent piloting device of claim 1, wherein the storage unit comprises:
the input and output module is used for acquiring input data, neural network parameters and calculation instructions;
the scalar data storage module is used for storing scalar data;
and the storage medium is used for storing the data blocks and is on-chip storage.
3. The intelligent piloting device of claim 1, wherein the neural network chip further comprises:
and a direct memory access DMA, which is used for transferring the input data, the neural network parameters, and the calculation instructions from the storage unit so that they can be called by the control unit and/or the operation unit.
4. The intelligent piloting device of claim 3, wherein the neural network chip further comprises:
and an instruction cache, which is used for caching instructions from the direct memory access DMA for the control unit to call, and is an on-chip cache.
5. The intelligent piloting device of claim 3, wherein the neural network parameters comprise input neurons, output neurons, and weights, and the neural network chip further comprises:
an input neuron cache, which is used for caching input neurons from the direct memory access DMA for the operation unit to call;
a weight cache, which is used for caching weights from the direct memory access DMA for the operation unit to call;
an output neuron cache, which is used for storing the output neurons obtained from the operation unit after operation and outputting them to the direct memory access DMA;
wherein the input neuron cache, the weight cache, and the output neuron cache are all on-chip caches.
6. The intelligent piloting device of claim 1, wherein the target recognition means comprises:
the image processing unit is used for preprocessing an input image, binarizing it, and selecting sub-regions for feature extraction;
and the comparison unit is used for comparing the extracted features with the samples in the sample library and preliminarily identifying the types of the obstacle targets in the images.
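The target recognition steps of claim 6 (binarize, extract features per sub-region, compare with the sample library) can be sketched as below. The threshold value, the choice of on-pixel density as the sub-region feature, and the sample-library format are assumptions for this sketch, not the patented method.

```python
# Illustrative sketch of claim 6's target recognition device:
# binarize, compute a per-sub-region feature, nearest-match to a library.
def binarize(image, threshold=128):
    """Image processing unit: threshold each pixel to 0 or 1."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def subregion_features(binary):
    """Split rows into top/bottom halves; use on-pixel density as features."""
    half = len(binary) // 2
    def density(rows):
        pixels = [px for row in rows for px in row]
        return sum(pixels) / len(pixels) if pixels else 0.0
    return [density(binary[:half]), density(binary[half:])]

def classify(image, sample_library):
    """Comparison unit: nearest sample by squared feature distance."""
    feats = subregion_features(binarize(image))
    def dist(sample):
        return sum((a - b) ** 2 for a, b in zip(feats, sample["features"]))
    return min(sample_library, key=dist)["label"]

library = [{"label": "buoy", "features": [1.0, 0.0]},
           {"label": "open_water", "features": [0.0, 0.0]}]
image = [[200, 220], [10, 20]]   # bright top half, dark bottom half
print(classify(image, library))  # → buoy
```

A production system would replace the density feature with richer descriptors and would write newly confirmed results back into the sample library, as the output data converter of claim 1 does.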
CN201810486527.8A 2018-05-18 2018-05-18 Processing device for neural network operation Active CN108764465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810486527.8A CN108764465B (en) 2018-05-18 2018-05-18 Processing device for neural network operation


Publications (2)

Publication Number Publication Date
CN108764465A CN108764465A (en) 2018-11-06
CN108764465B true CN108764465B (en) 2021-09-24

Family

ID=64007269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810486527.8A Active CN108764465B (en) 2018-05-18 2018-05-18 Processing device for neural network operation

Country Status (1)

Country Link
CN (1) CN108764465B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597756B (en) * 2019-08-26 2023-07-25 光子算数(北京)科技有限责任公司 Calculation circuit and data operation method
CN115600659A (en) * 2021-07-08 2023-01-13 北京嘉楠捷思信息技术有限公司(Cn) Hardware acceleration device and acceleration method for neural network operation
CN116395105B (en) * 2023-06-06 2023-10-20 海云联科技(苏州)有限公司 Automatic lifting compensation method and system for unmanned ship

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512723A (en) * 2016-01-20 2016-04-20 南京艾溪信息科技有限公司 Artificial neural network calculating device and method for sparse connection
CN106919915A (en) * 2017-02-22 2017-07-04 武汉极目智能技术有限公司 Map road mark and road quality harvester and method based on ADAS systems
CN107092252A (en) * 2017-04-11 2017-08-25 杭州光珀智能科技有限公司 A kind of robot automatic obstacle avoidance method and its device based on machine vision
US9760806B1 (en) * 2016-05-11 2017-09-12 TCL Research America Inc. Method and system for vision-centric deep-learning-based road situation analysis
CN107886099A (en) * 2017-11-09 2018-04-06 电子科技大学 Synergetic neural network and its construction method and aircraft automatic obstacle avoiding method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9792546B2 (en) * 2013-06-14 2017-10-17 Brain Corporation Hierarchical robotic controller apparatus and methods
US9767565B2 (en) * 2015-08-26 2017-09-19 Digitalglobe, Inc. Synthesizing training data for broad area geospatial object detection
CN107563332A (en) * 2017-09-05 2018-01-09 百度在线网络技术(北京)有限公司 For the method and apparatus for the driving behavior for determining unmanned vehicle


Also Published As

Publication number Publication date
CN108764465A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108764470B (en) Processing method for artificial neural network operation
CN111507271B (en) Airborne photoelectric video target intelligent detection and identification method
CN110472483B (en) SAR image-oriented small sample semantic feature enhancement method and device
CN109478239B (en) Method for detecting object in image and object detection system
CN110084234B (en) Sonar image target identification method based on example segmentation
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN110232350B (en) Real-time water surface multi-moving-object detection and tracking method based on online learning
Cheng et al. Robust small object detection on the water surface through fusion of camera and millimeter wave radar
Wang et al. Real-time underwater onboard vision sensing system for robotic gripping
CN108764465B (en) Processing device for neural network operation
CN113614730B (en) CNN classification of multi-frame semantic signals
CN113592896B (en) Fish feeding method, system, equipment and storage medium based on image processing
CN109753878A (en) Imaging recognition methods and system under a kind of bad weather
US20220277581A1 (en) Hand pose estimation method, device and storage medium
CN109871792B (en) Pedestrian detection method and device
CN108647781B (en) Artificial intelligence chip processing apparatus
Wang et al. Deep learning-based raindrop quantity detection for real-time vehicle-safety application
Zhu et al. YOLOv7-CSAW for maritime target detection
Jiang et al. Improve object detection by data enhancement based on generative adversarial nets
Kuan et al. Pothole detection and avoidance via deep learning on edge devices
Shabarinath et al. Convolutional neural network based traffic-sign classifier optimized for edge inference
Dong et al. ShipGAN: Generative Adversarial Network based simulation-to-real image translation for ships
CN112949380B (en) Intelligent underwater target identification system based on laser radar point cloud data
CN114495050A (en) Multitask integrated detection method for automatic driving forward vision detection
CN110895680A (en) Unmanned ship water surface target detection method based on regional suggestion network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant