CN111832356A - Information processing apparatus, method and related product - Google Patents


Info

Publication number
CN111832356A
Authority
CN
China
Prior art keywords
scene
unit
data
processing circuit
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910316954.6A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cambricon Technologies Corp Ltd
Original Assignee
Cambricon Technologies Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cambricon Technologies Corp Ltd filed Critical Cambricon Technologies Corp Ltd
Priority to CN201910316954.6A
Publication of CN111832356A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored program computers
    • G06F15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7803 System on board, i.e. computer system on one or more PCBs, e.g. motherboards, daughterboards or blades
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using biological neurons, e.g. biological neurons connected to an integrated circuit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Neurology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides an information processing apparatus, an information processing method, and related products. The related products include a neural network chip and a board card; the board card includes a storage device, an interface device, a control device, and the neural network chip, and the neural network chip is connected to the storage device, the control device, and the interface device respectively. The storage device is used to store data; the interface device is used to implement data transmission between the neural network chip and an external device; and the control device is used to monitor the state of the neural network chip.

Description

Information processing apparatus, method and related product
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing apparatus and method, and a related product.
Background
Blind people are a disadvantaged social group. At present, the tools that assist blind people in daily travel are mainly guide dogs and guide canes. A guide dog is expensive to train, has a limited working life, and is not universally available. A guide cane consists of a support rod and a handle and is used to support a blind person while walking. Although a blind person can probe the way with a guide cane, a cane or a guide dog helps only when the walking direction and route are already known; when the direction and route are unknown, neither provides adequate navigation, and even when asking for directions a blind person may fail to obtain correct guidance because of the visual impairment. Therefore, in the prior art, the travel safety of blind people is low, and the accuracy of navigation schemes that guide blind people forward is low.
Disclosure of Invention
The embodiments of the present application provide a blind person assisting device, a blind person assisting method, and related products, which aim to identify objects in a scene and provide an optimal navigation route for a blind person in real time, thereby facilitating travel by blind people.
In a first aspect, an embodiment of the present application provides a blind person assisting device, where the blind person assisting device includes a processing unit and a path planning unit, and the processing unit includes an arithmetic unit and a control unit;
the control unit is used for reading a first calculation instruction and sending the first calculation instruction to the arithmetic unit;
the operation unit is used for obtaining target scene data and a weight, and carrying out corresponding operation according to the target scene data, the weight and the first calculation instruction to obtain a scene identification result;
and the path planning unit is used for planning a path according to the scene recognition result to obtain a target path and executing preset prompt operation on the target path.
In a second aspect, the present application provides a machine learning arithmetic device, which includes one or more blind person assisting devices described in the first aspect. The machine learning arithmetic device is used to acquire data to be operated on and control information from other processing devices, execute a specified machine learning operation, and transmit the execution result to the other processing devices through an I/O interface;
when the machine learning arithmetic device includes a plurality of blind person assisting devices, those devices can be linked through a specific structure and transmit data between them;
for example, the plurality of blind person assisting devices may be interconnected through a PCIE bus and transmit data in order to support larger-scale machine learning operations; the devices may share the same control system or have their own control systems; they may share memory or have their own memories; and they may be connected in any interconnection topology.
In a third aspect, the present application provides a combined processing device, which includes the machine learning arithmetic device according to the second aspect, a universal interconnection interface, and other processing devices. The machine learning arithmetic device interacts with the other processing devices to jointly complete an operation specified by the user. The combined processing device may further include a storage device, which is connected to the machine learning arithmetic device and the other processing devices respectively and stores data of both.
In a fourth aspect, an embodiment of the present application provides a neural network chip, where the neural network chip includes the blind person assisting device according to the first aspect, the machine learning arithmetic device according to the second aspect, or the combined processing device according to the third aspect.
In a fifth aspect, an embodiment of the present application provides a neural network chip package structure, where the neural network chip package structure includes the neural network chip described in the fourth aspect;
in a sixth aspect, an embodiment of the present application provides a board card, where the board card includes the neural network chip package structure described in the fifth aspect.
In a seventh aspect, an embodiment of the present application provides an electronic device, where the electronic device includes the neural network chip described in the fourth aspect or the board card described in the sixth aspect.
In an eighth aspect, an embodiment of the present application provides a blind person assisting method, where the method is applied to a blind person assisting device, the blind person assisting device includes a processing unit and a path planning unit, and the processing unit includes an arithmetic unit and a control unit; the method comprises the following steps:
the control unit reads a first calculation instruction and sends the first calculation instruction to the arithmetic unit;
the operation unit obtains target scene data and a weight, and performs corresponding operation according to the target scene data, the weight and the first calculation instruction to obtain a scene identification result;
and the path planning unit carries out path planning according to the scene recognition result to obtain a target path and executes preset prompt operation on the target path.
In some embodiments, the electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a driving recorder, a navigator, a sensor, a webcam, a server, a cloud server, a camera, a video camera, a projector, a watch, a headset, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or an automobile; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric rice cooker, a humidifier, a washing machine, an electric lamp, a gas stove, and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus, and/or an electrocardiograph.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, the blind person assisting device performs real-time scene recognition on the environment in which the blind person is located to obtain a recognition result, plans a path according to that result, and performs a prompt operation for the obtained target path, so that the blind person can avoid obstacles while moving. This improves the convenience of travel for blind people and helps ensure that they travel safely.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a block diagram of a blind person assisting device according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a path planning unit according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a blind assisting method according to an embodiment of the present application;
fig. 4 is a block diagram of a combined processing device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of another combined processing device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a board card provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 shows a blind person assisting device provided by an embodiment of the present application. The blind person assisting device includes a processing unit 10 and a path planning unit 20, and the processing unit 10 includes an operation unit 100 and a control unit 200, where:
the control unit 200 is configured to read a first calculation instruction, and send the first calculation instruction to the arithmetic unit 100, where the first calculation instruction is a calculation instruction related to object identification in a scene;
the operation unit 100 is configured to obtain target scene data and a weight, and perform the corresponding operation according to the target scene data, the weight, and the first calculation instruction to obtain a scene recognition result, where obtaining the target scene data includes: the operation unit 100 actively reads the target scene data from the storage unit, or passively receives the target scene data sent by the storage unit;
and the path planning unit 20 is configured to perform path planning according to the scene recognition result to obtain a target path, and perform a preset prompt operation on the target path.
Optionally, the preset prompt operation on the target path may be: broadcasting the target path by voice; indicating the target path by vibration; or conveying the target path to the blind user through brain-wave stimulation, among others. The prompt operation is not uniquely limited here.
In a possible example, as shown in fig. 2, the path planning unit 20 includes a planning unit 201 and a voice unit 202, where the planning unit 201 is configured to: perform path planning according to the scene recognition result to obtain a target path, and send the target path to the voice unit 202; and the voice unit 202 is configured to broadcast the received target path by voice.
It can be seen that, in this embodiment of the application, the operation unit performs scene recognition on the obtained target scene data to obtain a scene recognition result, and the path planning unit determines a target path according to the scene recognition result and performs a preset prompt operation on the target path to guide the blind user along that path, so that obstacles encountered while moving are avoided, improving the safety and convenience of travel for the blind user.
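The cooperation of the three units can be illustrated by the following minimal Python sketch. It is written for this description only; the class names, the stub classifier, and the placeholder path are assumptions, not elements of the disclosure.

```python
class ControlUnit:
    def __init__(self, instruction_cache):
        self.instruction_cache = instruction_cache

    def read_instruction(self):
        # Read the first calculation instruction from the instruction cache.
        return self.instruction_cache.pop(0)


class OperationUnit:
    def compute(self, scene_data, weights, instruction):
        # Perform the operation specified by the calculation instruction on the
        # target scene data and the weights; the result is the scene
        # recognition result (here, a list of recognized object categories).
        return instruction(scene_data, weights)


class PathPlanningUnit:
    def plan(self, recognition_result):
        # Plan a target path from the scene recognition result and perform the
        # preset prompt operation (voice broadcast in this toy example).
        target_path = ["go straight 3 m", "turn left"]  # placeholder plan
        print("Voice broadcast:", " -> ".join(target_path))
        return target_path


# Hypothetical instruction: a classifier stub returning object categories.
instruction = lambda data, weights: ["sidewalk", "obstacle"]

control = ControlUnit(instruction_cache=[instruction])
result = OperationUnit().compute("scene pixels", [0.1, 0.2],
                                 control.read_instruction())
PathPlanningUnit().plan(result)
```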
In a possible example, the operation unit 100 comprises a main processing circuit 101 and at least one slave processing circuit 102, wherein:
the control unit 200 is further configured to parse the first calculation instruction to obtain a plurality of first sub-calculation instructions, and send the plurality of first sub-calculation instructions to the main processing circuit 101;
the main processing circuit 101 is configured to perform pre-processing on the target scene data, compose the target scene data into input data, split the input data into a plurality of input data blocks, and distribute the plurality of input data blocks and the plurality of first sub-calculation instructions to the slave processing circuits 102, where the pre-processing includes data type conversion processing, sparsification processing, or data padding processing, and the data type conversion processing includes conversion of fixed-point data to floating-point data, floating-point data to fixed-point data, and the like;
the slave processing circuit 102 is configured to execute intermediate operations in parallel according to the received input data blocks and first sub-calculation instructions to obtain intermediate results, and transmit the intermediate results to the main processing circuit 101;
the main processing circuit 101 is configured to perform subsequent processing on the received plurality of intermediate results to obtain an object class corresponding to the target scene data, and use the object class as the scene recognition result, where the subsequent processing includes activation processing, normalization processing, pooling processing, or bias processing, and the activation processing includes sigmoid, tanh, relu, softmax, or linear activation.
The splitting of the input data is described below with a concrete example. Because the output has the same data type as the input, the splitting of the output is essentially the same. Assume the input is a matrix of size H x W. If the value of H is small (below a set threshold, for example 100), the matrix may be split into H vectors, each vector being one row of the matrix; each vector is one input data block, and the position of its first element is recorded, i.e., the block is marked as input data block (h, w), where h and w are the positions of the block's first element in the H direction and the W direction. For the first input data block, for example, h is 1 and w is 1. After receiving an input data block, a slave processing circuit multiplies the block element-by-element with a column of the weight and accumulates the products to obtain an intermediate result (h, i), where h is the row index of the input data block and i is the index of the weight column used in the calculation; the main processing circuit then places the intermediate result at row h, column i of the hidden-layer output. For example, the intermediate result (1,1), obtained from input data block (1,1) and the first column of the weight, is arranged by the main processing circuit in the first row and first column of the hidden-layer output.
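A minimal NumPy sketch of this row-splitting and multiply-accumulate scheme follows, assuming the (h, i) reading of the example above; the function names are illustrative and do not appear in the disclosure.

```python
import numpy as np

def split_into_blocks(x):
    # Split the H x W input matrix into H row vectors; each row is one
    # input data block, tagged with the position (h, w) of its first element.
    return [((h + 1, 1), x[h, :]) for h in range(x.shape[0])]

def slave_multiply_accumulate(block, weight_col):
    # One slave processing circuit: element-wise multiply the input data
    # block with a weight column and accumulate the products.
    return float(np.dot(block, weight_col))

def master_assemble(blocks, weight):
    # The main processing circuit collects intermediate results (h, i) and
    # arranges them at row h, column i of the hidden-layer output.
    H, cols = len(blocks), weight.shape[1]
    out = np.zeros((H, cols))
    for (h, _), block in blocks:
        for i in range(cols):
            out[h - 1, i] = slave_multiply_accumulate(block, weight[:, i])
    return out

x = np.arange(6, dtype=float).reshape(2, 3)  # H=2, W=3 input matrix
w = np.ones((3, 4))                          # weight with 4 columns
assert np.allclose(master_assemble(split_into_blocks(x), w), x @ w)
```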
In a possible example, as shown in fig. 1, the blind person assisting device further includes an image obtaining unit 30, in which:
an image obtaining unit 30, configured to obtain a scene image, and obtain original scene data corresponding to the scene image;
the control unit 200 is further configured to obtain a second calculation instruction, and send the second calculation instruction to the arithmetic unit, where the second calculation instruction is a calculation instruction related to a target detection algorithm;
the operation unit 100 is further configured to obtain the original scene data and the weight, perform corresponding operation according to the original scene data, the weight, and a second calculation instruction to obtain an output result, determine a candidate region of the scene image according to the output result, mark the original scene data corresponding to the scene image included in the candidate region as the target scene data, where the candidate region includes at least one object.
Of course, after receiving the second calculation instruction, the control unit needs to parse the second calculation instruction to obtain a plurality of second sub-calculation instructions and send them to the operation unit 100; that is, a process similar to the parsing of the first calculation instruction is executed, which is not described again here.
Optionally, the image obtaining unit 30 may be a camera, and if the camera may be a depth camera, obtaining the scene image may be: shooting through the camera to obtain the scene image; or shooting a video through the camera to obtain a scene video, and extracting a plurality of frames of images from the scene video according to a preset sampling rate to obtain a scene image. For example, a scene image may be obtained by sampling one frame image from the scene video every 10 frames, 15 frames, 20 frames, or other values.
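As a sketch of this frame-extraction step: OpenCV's cv2.VideoCapture is a real API, but the default interval of 15 frames and the overall structure are illustrative assumptions, not the disclosed implementation.

```python
import cv2  # OpenCV, assumed available

def sample_scene_images(video_path, every_n_frames=15):
    # Extract one frame every `every_n_frames` frames from the scene video,
    # matching the preset sampling rate described above (10/15/20 frames).
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```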
In a possible example, when performing path planning according to the scene recognition result to obtain a target path, the path planning unit 20 is specifically configured to: plan a plurality of travel routes according to the scene recognition result; determine the category of objects on each of the plurality of travel routes; determine the score of each travel route according to the object categories, where the object category and the score have a preset mapping relationship; and take the travel route with the highest score as the target path.
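A minimal sketch of this scoring scheme follows; the category-to-score values are invented for illustration, since the disclosure only states that a preset mapping between object category and score exists.

```python
# Hypothetical category -> score mapping; the actual values are not given
# in the disclosure, only that such a preset mapping exists.
CATEGORY_SCORE = {"clear sidewalk": 10, "pedestrian": 4, "pothole": -8, "car": -10}

def score_route(object_categories):
    # A route's score accumulates the scores of the object categories on it.
    return sum(CATEGORY_SCORE.get(c, 0) for c in object_categories)

def pick_target_path(routes):
    # routes: mapping route name -> list of object categories detected on it.
    # The travel route with the highest score becomes the target path.
    return max(routes, key=lambda r: score_route(routes[r]))

routes = {"route A": ["clear sidewalk"], "route B": ["pedestrian", "pothole"]}
assert pick_target_path(routes) == "route A"
```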
In the above possible example, when the image obtaining unit 30 is a depth camera, planning the plurality of travel routes according to the scene recognition result may be: acquiring the depth information of the scene image through the depth camera; determining the distance between each object corresponding to the scene recognition result and the depth camera according to the depth information; and constructing a scene map according to the distances and planning a plurality of travel routes from the scene map.
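A minimal sketch of deriving an object's distance from the depth information, assuming the depth camera yields a per-pixel depth map in metres and each recognized object comes with a bounding box; both assumptions are made here for illustration only.

```python
import numpy as np

def object_distance(depth_map, bbox):
    # Estimate the distance between a recognized object and the depth camera
    # as the median depth inside the object's bounding box (x0, y0, x1, y1).
    x0, y0, x1, y1 = bbox
    return float(np.median(depth_map[y0:y1, x0:x1]))

depth_map = np.full((480, 640), 5.0)   # toy depth map: everything 5 m away
depth_map[200:280, 300:380] = 1.2      # a nearby obstacle
print(object_distance(depth_map, (300, 200, 380, 280)))  # -> 1.2
```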
In a possible example, as shown in fig. 1, the processing unit further includes an instruction cache 300, a weight cache 400, an input neuron cache 500, and an output neuron cache 600;
the instruction cache 300 is configured to cache the calculation instructions required in the operation and send them to the operation unit 100 through the control unit 200, where the calculation instructions include the first calculation instruction and the second calculation instruction; that is, the first calculation instruction is cached when scene recognition is performed to obtain the scene recognition result, and the second calculation instruction is cached when the target scene data is determined from the original scene data;
the input neuron cache 500 is configured to cache scene data and send the cached scene data to the operation unit 100, where the scene data includes the original scene data and the target scene data within it; that is, the target scene data is cached when scene recognition is performed to obtain the scene recognition result, and the original scene data is cached when the target scene data is determined from the original scene data;
the weight cache 400 is configured to cache the weights required in the calculation and send the cached weights to the operation unit 100, where the weights include a first weight related to the first calculation instruction and a second weight related to the second calculation instruction; that is, the first weight is cached when scene recognition is performed to obtain the scene recognition result, and the second weight is cached when the target scene data is determined from the original scene data;
the output neuron cache 600 is configured to cache the scene recognition result sent by the operation unit 100 and send the cached scene recognition result to the path planning unit 20; specifically, the scene recognition result is transmitted to the storage unit 40 through the direct memory access unit 50, and the path planning unit 20 reads the scene recognition result from the storage unit 40.
In a possible example, as shown in fig. 1, the blind person assisting device further includes a storage unit 40 and a direct memory access unit 50, and the storage unit 40 may include one or any combination of a register and a cache. Specifically, the cache is used to store the calculation instructions (the first calculation instruction and the second calculation instruction); the register is used to store the scene data acquired by the image obtaining unit 30; and the cache is a scratchpad cache. The direct memory access unit 50 is configured to read data and/or instructions from, or store them to, the storage unit 40; specifically, it sends the read scene data to the input neuron cache 500, sends the read calculation instruction (the first or the second calculation instruction) to the instruction cache 300, sends the read weight to the weight cache 400, and reads the scene recognition result from the output neuron cache 600 and stores it in the storage unit 40.
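The resulting data flow can be summarized by the following toy sketch, in which Python dictionaries stand in for the hardware buffers; it illustrates only the flow described above, not the actual hardware.

```python
# Toy model of the data flow among storage unit, DMA unit, caches,
# operation unit and path planner; dictionaries stand in for buffers.
storage_unit = {"scene_data": "pixels", "instruction": "RECOGNIZE",
                "weight": [0.5]}
caches = {"instruction": None, "input_neuron": None, "weight": None,
          "output_neuron": None}

def dma_load():
    # The direct memory access unit reads scene data, calculation
    # instruction and weight from the storage unit into the caches.
    caches["instruction"] = storage_unit["instruction"]
    caches["input_neuron"] = storage_unit["scene_data"]
    caches["weight"] = storage_unit["weight"]

def operate():
    # The operation unit consumes the cached data and writes the scene
    # recognition result into the output neuron cache.
    caches["output_neuron"] = f"result-of-{caches['instruction']}"

def dma_store_and_plan():
    # The result returns to the storage unit via DMA; the path planning
    # unit reads it from there.
    storage_unit["scene_recognition_result"] = caches["output_neuron"]
    return storage_unit["scene_recognition_result"]

dma_load()
operate()
print("path planner receives:", dma_store_and_plan())
```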
Referring to fig. 3, fig. 3 is a schematic flowchart of a blind person assisting method provided by an embodiment of the present application. The method is applied to the above blind person assisting device, where the blind person assisting device includes a processing unit and a path planning unit, and the processing unit includes an arithmetic unit, a control unit, and a cache unit; the method includes the following steps:
step S301, the control unit reads a first calculation instruction, and sends the first calculation instruction to the arithmetic unit.
Step S302, the operation unit obtains target scene data and a weight, and corresponding operation is carried out according to the target scene data, the weight and the first calculation instruction to obtain a scene recognition result.
Step S303, the path planning unit plans a path according to the scene recognition result to obtain a target path, and executes a preset prompt operation on the target path.
In a possible example, the arithmetic unit comprises a master processing circuit and at least one slave processing circuit;
the control unit parses the first calculation instruction to obtain a plurality of first sub-calculation instructions, and sends the plurality of first sub-calculation instructions to the main processing circuit;
the main processing circuit performs pre-processing on the target scene data, composes the target scene data into input data, splits the input data into a plurality of input data blocks, and distributes the plurality of input data blocks and the plurality of first sub-calculation instructions to the slave processing circuits;
the slave processing circuits execute intermediate operations in parallel according to the received input data blocks and first sub-calculation instructions to obtain intermediate results, and transmit the intermediate results to the main processing circuit;
and the main processing circuit performs subsequent processing on the received intermediate results to obtain an object class corresponding to the target scene data, and uses the object class as the scene recognition result.
In a possible example, performing path planning according to the scene recognition result, and obtaining a target path specifically includes:
planning a plurality of traveling routes according to the scene recognition result;
determining the category of objects on each of the plurality of travel routes;
scoring each traveling route according to the object type, wherein the object type and the scoring have a preset mapping relation;
and taking the travel route with the highest score as a target path.
In a possible example, the apparatus further comprises an image acquisition unit;
the image acquisition unit is used for acquiring a scene image to obtain original scene data corresponding to the scene image;
the control unit is used for acquiring a second calculation instruction and sending the second calculation instruction to the arithmetic unit;
the operation unit is configured to obtain the original scene data and the weight, perform corresponding operation according to the original scene data, the weight, and a second calculation instruction to obtain an output result, determine a candidate region of the scene image according to the output result, mark original scene data corresponding to the scene image included in the candidate region as the target scene data, where the candidate region includes at least one object.
In a possible example, the processing unit further comprises an instruction cache, an input neuron cache, a weight cache, and an output neuron cache;
the instruction cache caches calculation instructions required in operation;
the input neuron cache caches scene data and sends the cached scene data to the operation unit;
the weight cache caches the weights and sends the cached weights to the operation unit;
and the output neuron cache caches the scene recognition result sent by the operation unit and sends the cached scene recognition result to the path planning unit.
The present application also discloses a GRU device, which includes one or more of the computing devices mentioned in the present application and is configured to acquire data to be operated on and control information from other processing devices, execute a specified GRU operation, and transmit the execution result to peripheral devices through an I/O interface. Peripheral devices include, for example, cameras, displays, mice, keyboards, network cards, WiFi interfaces, and servers. When more than one computing device is included, the computing devices may be linked through a specific structure and transmit data between them, for example interconnected through a PCIE bus, to support larger-scale convolutional neural network training operations. In that case the devices may share the same control system or have separate control systems, and may share memory or have separate memories for each accelerator. In addition, the interconnection mode can be any interconnection topology.
The GRU device has high compatibility and can be connected to various types of servers through a PCIE interface.
The present application also discloses a combined processing device, which includes the above GRU device, a universal interconnection interface, and other processing devices. The GRU device interacts with the other processing devices to jointly complete an operation specified by the user. Fig. 4 is a schematic diagram of the combined processing device.
The other processing devices include one or more types of general-purpose or special-purpose processors such as central processing units (CPUs), graphics processing units (GPUs), and neural network processors. The number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the GRU device and external data and control, performing data transfer and completing basic control of the GRU device such as starting and stopping; the other processing devices may also cooperate with the GRU device to complete computing tasks together.
The universal interconnection interface is used to transmit data and control instructions between the GRU device and the other processing devices. The GRU device acquires the required input data from the other processing devices and writes it into the on-chip storage of the GRU device; it can obtain control instructions from the other processing devices and write them into an on-chip control cache; and it can also read the data in the storage module of the GRU device and transmit it to the other processing devices.
Optionally, as shown in fig. 5, the structure may further include a storage device, which is connected to the GRU device and the other processing devices respectively. The storage device is used to store data of the GRU device and the other processing devices, and is especially suitable for data to be operated on that cannot be fully held in the internal storage of the GRU device or the other processing devices.
The combined processing device can serve as an SOC (system on chip) for equipment such as mobile phones, robots, unmanned aerial vehicles, and video monitoring equipment, effectively reducing the core area of the control part, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, display, mouse, keyboard, network card, or WiFi interface.
In some embodiments, a chip is also provided, which includes the above GRU device or the above combined processing device.
In some embodiments, a chip package structure is provided, which includes the above chip.
In some embodiments, a board card is provided, which includes the above chip package structure. Referring to fig. 6, fig. 6 provides a board card that may include other supporting components in addition to the chip 389, including but not limited to: a memory device 390, an interface device 391, and a control device 392;
the memory device 390 is connected to the chip in the chip package structure through a bus for storing data. The memory device may include a plurality of groups of memory cells 393. Each group of the storage units is connected with the chip through a bus. It is understood that each group of the memory cells may be a DDR SDRAM (Double Data Rate SDRAM).
DDR can double the speed of SDRAM without increasing the clock frequency. DDR allows data to be read out on the rising and falling edges of the clock pulse. DDR is twice as fast as standard SDRAM. In one embodiment, the storage device may include 4 sets of the storage unit. Each group of the memory cells may include a plurality of DDR4 particles (chips). In one embodiment, the chip may internally include 4 72-bit DDR4 controllers, and 64 bits of the 72-bit DDR4 controller are used for data transmission, and 8 bits are used for ECC check. It can be understood that when DDR4-3200 particles are adopted in each group of memory cells, the theoretical bandwidth of data transmission can reach 25600 MB/s.
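The 25600 MB/s figure is consistent with the DDR4-3200 transfer rate and the 64-bit data width of each controller; the following sanity-check calculation is added here and is not a statement from the original text:

3200 MT/s × 64 bit per transfer ÷ 8 bit per byte = 25600 MB/s per group of memory cells.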
In one embodiment, each group of memory cells includes a plurality of double data rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice per clock cycle. A controller for controlling the DDR is provided in the chip and is used to control data transmission and data storage for each memory unit.
The interface device is electrically connected to the chip in the chip package structure. The interface device is used to implement data transmission between the chip and an external device (such as a server or a computer). For example, in one embodiment, the interface device may be a standard PCIE interface, and the data to be processed is transmitted from the server to the chip through the standard PCIE interface to implement the data transfer. Preferably, when a PCIE 3.0 x16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may be another interface; the present application does not limit the concrete form of the other interface, as long as the interface unit can implement the transfer function. In addition, the calculation result of the chip is transmitted back to the external device (e.g., the server) by the interface device.
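The 16000 MB/s figure matches the nominal PCIe 3.0 numbers; the following sanity-check calculation is added here and is not from the original text. Each PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, so sixteen lanes give:

16 lanes × 8 GT/s × 128/130 ÷ 8 bit per byte ≈ 15754 MB/s ≈ 16000 MB/s.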
The control device is electrically connected to the chip and is used to monitor the state of the chip. Specifically, the chip and the control device may be electrically connected through an SPI interface. The control device may include a single-chip microcomputer (MCU). The chip may include a plurality of processing chips, a plurality of processing cores, or a plurality of processing circuits, and may drive a plurality of loads; therefore, the chip can be in different working states such as heavy load and light load. The control device can regulate and control the working states of the plurality of processing chips, processing cores, or processing circuits in the chip.
In some embodiments, an electronic device is provided that includes the above board card.
The electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a webcam, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle comprises an airplane, a ship, and/or an automobile; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric rice cooker, a humidifier, a washing machine, an electric lamp, a gas stove, and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus, and/or an electrocardiograph.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions, but those skilled in the art will recognize that the present application is not limited by the order of the actions described, since some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one kind of logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated unit, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, and the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A blind person assisting device, characterized by comprising a processing unit and a path planning unit, wherein the processing unit comprises an arithmetic unit and a control unit;
the control unit is used for reading a first calculation instruction and sending the first calculation instruction to the arithmetic unit;
the operation unit is used for obtaining target scene data and a weight, and carrying out corresponding operation according to the target scene data, the weight and the first calculation instruction to obtain a scene identification result;
and the path planning unit is used for planning a path according to the scene recognition result to obtain a target path and executing preset prompt operation on the target path.
2. The apparatus of claim 1,
the arithmetic unit comprises a main processing circuit and at least one slave processing circuit;
the control unit is further configured to parse the first calculation instruction to obtain a plurality of first sub-calculation instructions, and send the plurality of first sub-calculation instructions to the main processing circuit;
the main processing circuit is configured to perform pre-processing on the target scene data, compose the target scene data into input data, split the input data into a plurality of input data blocks, and distribute the plurality of input data blocks and the plurality of first sub-calculation instructions to the at least one slave processing circuit;
the slave processing circuit is configured to execute intermediate operations in parallel according to the received input data blocks and first sub-calculation instructions to obtain a plurality of intermediate results, and transmit the plurality of intermediate results to the main processing circuit;
the main processing circuit is configured to perform subsequent processing on the received multiple intermediate results to obtain an object class corresponding to the target scene data, and use the object class as the scene recognition result.
3. The device according to claim 1 or 2,
when the path planning is performed according to the scene recognition result to obtain the target path, the path planning unit is specifically configured to:
planning a plurality of traveling routes according to the scene recognition result;
determining the category of objects on each of the plurality of travel routes;
and determining the score of each travelling route according to the object category on each travelling route, and taking the travelling route with the highest score as a target path, wherein the object category and the score have a preset mapping relation.
4. The apparatus according to any one of claims 1-3, further comprising an image acquisition unit;
the image acquisition unit is used for acquiring a scene image to obtain original scene data corresponding to the scene image;
the control unit is used for reading a second calculation instruction and sending the second calculation instruction to the arithmetic unit;
the operation unit is used for obtaining the original scene data and the weight, performing corresponding operation according to the original scene data, the weight and a second calculation instruction to obtain an output result, determining a candidate region of the scene image according to the output result, and marking the original scene data corresponding to the scene image contained in the candidate region as target scene data, wherein the candidate region comprises at least one object.
5. The apparatus according to any one of claims 1 to 4,
the processing unit also comprises an instruction cache, an input neuron cache, a weight cache and an output neuron cache;
the instruction cache is used for caching calculation instructions required in operation;
the input neuron cache is used for caching scene data and sending the cached scene data to the arithmetic unit;
the weight cache is used for caching the weight and sending the cached weight to the operation unit;
and the output neuron cache is used for caching the scene recognition result and sending the cached scene recognition result to the path planning unit.
6. A neural network chip, comprising the apparatus of any one of claims 1-5.
7. A board card, characterized in that the board card comprises: a storage device, an interface device, a control device, and the neural network chip of claim 6;
wherein, the neural network chip is respectively connected with the storage device, the control device and the interface device;
the storage device is used for storing data;
the interface device is used for realizing data transmission between the neural network chip and external equipment;
and the control device is used for monitoring the state of the neural network chip.
8. A blind person assisting method, characterized in that the method is applied to a blind person assisting device, where the blind person assisting device comprises a processing unit and a path planning unit, and the processing unit comprises an arithmetic unit and a control unit; the method comprises the following steps:
the control unit reads a first calculation instruction and sends the first calculation instruction to the arithmetic unit;
the operation unit obtains target scene data and a weight, and performs corresponding operation according to the target scene data, the weight and the first calculation instruction to obtain a scene identification result;
and the path planning unit carries out path planning according to the scene recognition result to obtain a target path and executes preset prompt operation on the target path.
9. The method of claim 8,
the arithmetic unit comprises a main processing circuit and at least one slave processing circuit;
the control unit parses the first calculation instruction to obtain a plurality of first sub-calculation instructions, and sends the plurality of first sub-calculation instructions to the main processing circuit;
the main processing circuit performs pre-processing on the target scene data, composes the target scene data into input data, splits the input data into a plurality of input data blocks, and distributes the plurality of input data blocks and the plurality of first sub-calculation instructions to the at least one slave processing circuit;
the slave processing circuit executes intermediate operations in parallel according to the received input data blocks and first sub-calculation instructions to obtain intermediate results, and transmits the intermediate results to the main processing circuit;
and the main processing circuit executes subsequent processing on the received intermediate results to obtain an object type corresponding to the target scene data, and the object type is used as the scene recognition result.
10. The method according to claim 8 or 9, wherein the planning a path according to the scene recognition result to obtain a target path specifically comprises:
planning a plurality of traveling routes according to the scene recognition result;
determining the category of objects on each of the plurality of travel routes;
and determining the score of each travelling route according to the object category on each travelling route, and taking the travelling route with the highest score as a target path, wherein the object category and the score have a preset mapping relation.
CN201910316954.6A 2019-04-19 2019-04-19 Information processing apparatus, method and related product Pending CN111832356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910316954.6A CN111832356A (en) 2019-04-19 2019-04-19 Information processing apparatus, method and related product

Publications (1)

Publication Number Publication Date
CN111832356A true CN111832356A (en) 2020-10-27

Family

ID=72915007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910316954.6A Pending CN111832356A (en) 2019-04-19 2019-04-19 Information processing apparatus, method and related product

Country Status (1)

Country Link
CN (1) CN111832356A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107206592A (en) * 2015-01-26 2017-09-26 杜克大学 Special purpose robot's motion planning hardware and production and preparation method thereof
US20180168911A1 (en) * 2016-12-21 2018-06-21 Inventec (Pudong) Technology Corporation System For Guiding Blind And Method Thereof
CN107402018A (en) * 2017-09-21 2017-11-28 北京航空航天大学 A kind of apparatus for guiding blind combinatorial path planing method based on successive frame
CN107479559A (en) * 2017-09-22 2017-12-15 中国地质大学(武汉) A kind of market blindmen intelligent shopping guide system and method
CN108180901A (en) * 2017-12-08 2018-06-19 深圳先进技术研究院 Indoor navigation method, device, robot and the storage medium of blind-guidance robot
CN108844545A (en) * 2018-06-29 2018-11-20 合肥信亚达智能科技有限公司 A kind of auxiliary traveling method and system based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination