CN113269317A - Pulse neural network computing array - Google Patents


Info

Publication number
CN113269317A
CN113269317A (application CN202110400723.0A)
Authority
CN
China
Prior art keywords
neural network
pulse
electrically connected
array
membrane potential
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110400723.0A
Other languages
Chinese (zh)
Inventor
李丽
沈思睿
傅玉祥
陈沁雨
徐瑾
王心沅
何书专
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202110400723.0A priority Critical patent/CN113269317A/en
Publication of CN113269317A publication Critical patent/CN113269317A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Abstract

The invention provides a spiking neural network computing array that supports continuous convolution-pooling operation and parallel inference over the spiking neural network, improving execution efficiency during algorithm inference. The invention comprises spiking neural network computing clusters formed from a plurality of spiking neural network computing units, where each computing unit comprises a membrane potential accumulator, a pulse emitter, a pooling buffer, and a pooling comparator. The membrane potential accumulator is electrically connected to the pulse emitter, and the pulse emitter is electrically connected to the pooling buffer and the pooling comparator. The membrane potential accumulator accumulates the input pulse sequence; the pulse emitter decides whether to emit a pulse to the next stage according to the membrane potential from the accumulator; the pooling buffer counts and caches the pulses from the pulse emitter; and the pooling comparator performs comparison operations on the values read from the buffer.

Description

Pulse neural network computing array
Technical Field
The invention relates to hardware implementation of neural network computation, and in particular to a spiking neural network computing array.
Background
Spiking neural networks (SNNs) use spiking neurons as computational units and can mimic the information encoding and processing of the human brain. Unlike traditional deep neural networks, which transmit information as specific numeric values, a spiking neural network transmits information through the timing of each pulse in a spike train, providing sparse, high-precision computing power. A spiking neuron accumulates its inputs into a membrane potential and fires a pulse when a specific threshold is reached, enabling event-driven computation. Owing to the sparsity of spike events and this event-driven form of computation, spiking neural networks offer excellent energy efficiency and are currently the neural network structure closest to the biological brain.
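The accumulate-and-fire behavior described above can be sketched as follows. This is a minimal illustration, not the patent's circuit: the function name, the weighted-sum input form, and the reset-by-subtraction rule (suggested later in the description, where the emitter subtracts the threshold) are assumptions for exposition.

```python
def integrate_and_fire(spike_inputs, weights, threshold=32):
    """Accumulate weighted input spikes into a membrane potential and
    emit an output spike whenever the potential exceeds the threshold."""
    v = 0
    out = []
    for spikes in spike_inputs:  # one binary spike vector per time step
        # event-driven accumulation: only arriving spikes add their weight
        v += sum(w for s, w in zip(spikes, weights) if s)
        if v > threshold:
            out.append(1)        # fire a spike to the next stage
            v -= threshold       # reset by subtraction
        else:
            out.append(0)
    return out
```

For example, with three inputs of weight 10 all firing on every step and a threshold of 32, the neuron stays silent on the first step and fires on the next two.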
Given the characteristics of spiking neural network data, a traditional neural network computing array structure cannot fully exploit this sparsity, so an arithmetic array must be designed around the computational characteristics of spiking networks. Most spiking neural network computing arrays disclosed to date support only fully connected and convolutional operations. The defect of these designs is poor support for the pooling layer of a spiking neural network: completing the pooling operation in hardware incurs large overhead. Such designs cannot adapt to diverse network structures, and the arrays have poor generality.
Because the spiking neural network algorithm model is event-driven, the inference process requires accelerating convergence and refreshing the membrane potential. Designing a computing structure that adapts to the event refresh rate and improves the speed of inference convergence is also part of spiking neural network computing unit research.
In summary, designing a spiking neural network computing array with good support for every layer of the network, high resource utilization, low computation latency, and low overhead is an urgent problem in current spiking neural network research.
Disclosure of Invention
Purpose of the invention: to overcome the defects of the prior art, the invention provides a spiking neural network computing array that better supports the pooling layer of a spiking neural network while comprehensively balancing hardware implementation precision, area, power consumption, and operation latency.
Technical scheme: to address the above problems, the invention provides a spiking neural network computing array supporting continuous convolution-pooling operation and parallel inference of the spiking neural network, thereby improving execution efficiency during algorithm inference. The invention comprises spiking neural network computing clusters formed from a plurality of spiking neural network computing units; each computing unit comprises a membrane potential accumulator, a pulse emitter, a pooling buffer, and a pooling comparator. The membrane potential accumulator is electrically connected to the pulse emitter, and the pulse emitter is electrically connected to the pooling buffer and the pooling comparator. The membrane potential accumulator accumulates the input pulse sequence; the pulse emitter decides whether to emit a pulse to the next stage according to the membrane potential from the accumulator; the pooling buffer counts and caches the pulses from the pulse emitter; and the pooling comparator performs comparison operations on the values read from the buffer.
In a further embodiment, the membrane potential accumulator comprises at least one membrane potential loading unit and at least one fixed-point accumulator, the membrane potential loading unit and the fixed-point accumulator being electrically connected to each other.
In a further embodiment, the pulse emitter comprises at least one fixed-point comparator and at least one fixed-point subtractor, the fixed-point comparator and the fixed-point subtractor being electrically connected to each other.
In a further embodiment, the pooled buffer comprises at least one pooled count load unit, at least one counter, and at least one first-in-first-out data buffer, the pooled count load unit, the counter, and the first-in-first-out data buffer being electrically connected to each other;
the pooling comparator comprises at least one fixed point number comparator and at least one data manager, and the fixed point number comparator and the data manager are electrically connected with each other.
In a further embodiment, the membrane potential loading unit is started simultaneously with the fixed-point accumulator, so that the loading latency is hidden by the accumulation process.
In a further embodiment, the output of the counter is electrically connected to the FIFO data buffer and the pool comparator.
In a further embodiment, the data manager includes a first-in first-out data buffer reading unit, a packet parsing unit, and a data allocation unit; the FIFO data buffer reading unit is electrically connected with the data packet analyzing unit, and the data packet analyzing unit is electrically connected with the data distributing unit.
In a further embodiment, each of the spiking neural network computing units comprises an input port and an output port, the input port being electrically connected to the output port via the membrane potential accumulator, the pulse transmitter, the pooled buffer and the pooled comparator.
In a further embodiment, a plurality of spiking neural network computing units form a spiking neural network computing cluster, and the spiking neural network computing units within the cluster share the same set of control logic.
In a further embodiment, the inputs of the spiking neural network-specific computational array and the inputs of the spiking neural network computational cluster are electrically connected to each other, and the outputs of the spiking neural network computational cluster and the outputs of the spiking neural network-specific computational array are electrically connected to each other.
Beneficial effects: the spiking-neural-network-specific computing array can realize several different operation modes through external configuration signals. In addition, the invention achieves parallel computation of the spiking neural network with an 8×8 spiking neural network computing unit structure, offering good operational flexibility and hardware resource utilization. The accumulation method fully exploits the inherent sparsity of the spiking neural network, eliminating the large amount of redundancy found in traditional computing arrays and improving the operation efficiency of the spiking neural network.
Drawings
FIG. 1 is a schematic structural diagram of the spiking neural network computing array of the present invention.
FIG. 2 is a schematic structural diagram of the spiking neural network computing unit of the present invention.
FIG. 3 is a schematic structural diagram of the membrane potential accumulator of the present invention.
FIG. 4 is a schematic structural diagram of the pulse emitter of the present invention.
FIG. 5 is a schematic structural diagram of the pooling buffer of the present invention.
FIG. 6 is a schematic structural diagram of the pooling comparator of the present invention.
FIG. 7 is a schematic diagram of single convolutional layer computation in the spiking neural network.
FIG. 8 is a schematic diagram of continuous convolution-pooling layer computation in the spiking neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1, SNPU denotes a spiking neural network computing unit and SNPU CLUSTER denotes a spiking neural network computing cluster. The invention discloses a spiking neural network computing array comprising computing clusters formed from a plurality of spiking neural network computing units. A single computing unit can process raw data; units within a cluster compute different rows of each image in parallel, and units in different clusters compute different convolution patterns of the same image in parallel. The computing array in this embodiment comprises 8 spiking neural network computing clusters, each containing 8 computing units, enabling parallel processing of 64 channels of data. As shown in FIG. 2, the spiking neural network computing unit of the present invention comprises a membrane potential accumulator, a pulse emitter, a pooling buffer, and a pooling comparator. The membrane potential accumulator is electrically connected to the pulse emitter, and the pulse emitter is electrically connected to the pooling buffer and the pooling comparator.
Describing the above components in further detail: referring to fig. 3, the membrane potential accumulator performs the accumulation operation on the input pulse sequence. Specifically, it comprises a membrane potential loading unit and a fixed-point accumulator. The membrane potential loading unit has an initialization input port, a load-enable port, a membrane potential load port, and a membrane potential register.
When the initialization input port is 1'b1, the membrane potential loading unit resets the membrane potential register to 0; when the initialization input port is 1'b0, the unit reads the value at the membrane potential load port into the register according to the enable signal on the load-enable port. The fixed-point accumulator has one 8-bit weight input port, one 1-bit input-enable port, and one 8-bit membrane potential output port. When the input-enable port is 1'b1, the fixed-point accumulator accumulates into the membrane potential register and outputs the 8-bit membrane potential value.
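The port behavior just described can be sketched behaviorally as below. This is an illustrative model, not RTL: the class and method names are invented, and the 8-bit wrap-around (`& 0xFF`) is an assumption about overflow behavior that the text does not specify.

```python
class MembranePotentialAccumulator:
    """Behavioral sketch of the membrane potential loading unit plus
    fixed-point accumulator: init clears the 8-bit register, load-enable
    reads a stored potential back in, input-enable accumulates a weight."""

    def __init__(self):
        self.v = 0  # 8-bit membrane potential register

    def step(self, init=False, load_en=False, load_val=0,
             in_en=False, weight=0):
        if init:                          # init port == 1'b1: reset to 0
            self.v = 0
        elif load_en:                     # init == 1'b0: load stored value
            self.v = load_val & 0xFF
        if in_en:                         # accumulate the 8-bit weight
            self.v = (self.v + weight) & 0xFF
        return self.v                     # 8-bit membrane potential output
```

Loading and accumulation can thus proceed in the same step, mirroring the embodiment in which the loading unit starts simultaneously with the accumulator.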
As shown in fig. 4, the pulse emitter determines whether to emit a pulse to the next stage according to the membrane potential value from the accumulator. Specifically, the pulse emitter comprises a fixed-point comparator and a fixed-point subtractor. The fixed-point comparator has two 8-bit inputs and one 1-bit output: it compares the input membrane potential with the firing threshold, pulling its output high when the membrane potential exceeds the threshold and leaving it low otherwise. The fixed-point subtractor has two 8-bit inputs and one 8-bit output: when the comparator output is high, it subtracts the threshold from the membrane potential and outputs the result. The membrane potential threshold defaults to 32 in this design and is configurable.
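The comparator-plus-subtractor pair can be modeled as a single function. The function name and tuple return are illustrative assumptions; the strict "greater than" comparison and the default threshold of 32 follow the text.

```python
def pulse_emitter(v, threshold=32):
    """Fixed-point comparator + subtractor: emit a spike and subtract
    the threshold when the membrane potential exceeds it; otherwise
    pass the potential through unchanged."""
    if v > threshold:                 # comparator output pulled high
        return 1, v - threshold       # spike + residual membrane potential
    return 0, v                       # no spike, potential unchanged
```

Note that a potential exactly equal to the threshold does not fire, matching the strictly-greater comparison described above.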
As shown in fig. 5, the pooling buffer counts and caches pulses from the pulse emitter. Specifically, it comprises a pooling count loading unit, a counter, and a first-in-first-out (FIFO) data buffer. The pooling count loading unit has a 1-bit initialization input port, a 1-bit load-enable port, and an 8-bit pooling count load port. When the initialization input port is 1'b1, the unit resets the pooling counter to 0; when it is 1'b0, it reads the value at the pooling count load port into the counter according to the load-enable signal. The counter has a 1-bit input port and an 8-bit output port; it increments according to the pulse emission condition, and the current pooling count is output after each time step. The FIFO data buffer has a 1-bit input-enable port, a 9-bit input port, a 1-bit output-enable port, and a 9-bit output port. Its data format is {state, pool_cnt}, i.e., the transmit state concatenated with the pooling count. The FIFO depth is 128, which is greater than the length of one image row. The counter outputs its 9-bit data to the pooling comparator and the FIFO data buffer simultaneously.
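The 9-bit {state, pool_cnt} packing and row buffering can be sketched as below. This is a behavioral assumption, not the circuit: the class and method names are invented, and a Python `deque` stands in for the depth-128 hardware FIFO.

```python
from collections import deque

class PoolingBuffer:
    """Sketch of the pooling buffer: a counter tallies spikes from the
    pulse emitter, and a depth-128 FIFO holds {state, pool_cnt} words
    for one image row so the pooling comparator can pair it with the
    counts of the next row."""
    DEPTH = 128  # greater than one image row, per the text

    def __init__(self):
        self.count = 0
        self.fifo = deque(maxlen=self.DEPTH)

    def accumulate(self, spike):
        self.count += int(spike)          # count pulses as they arrive

    def flush(self, state):
        # pack transmit state (bit 8) with the 8-bit pooling count
        word = (state << 8) | (self.count & 0xFF)
        self.fifo.append(word)
        self.count = 0                    # reset for the next window
        return word
```

For instance, two spikes counted with transmit state 1 pack into the 9-bit word 0b1_00000010 = 258.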
As shown in fig. 6, the pooling comparator performs comparison operations on the outputs of the pooling buffer to find the maximum value within the input image mask. Specifically, the pooling comparator comprises a data manager and a fixed-point number comparator. The data manager is connected to the output of the pooling buffer and simultaneously reads the pooling counts and transmit states of the upper and lower rows, namely two 9-bit outputs from the FIFO buffer and two outputs from the pooling counter. The fixed-point number comparator has four 8-bit inputs and one 2-bit output: it compares the four 8-bit inputs against each other and encodes the position of the maximum in the 2-bit output. The four 8-bit values comprise two pooling counts read sequentially from the FIFO buffer and two pooling counts output directly by the pooling counter; the comparison finds the maximum of the four, which correspond to 4 pixels in two adjacent rows of one image.
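The four-way comparison over a 2×2 window can be sketched as below. The function name and the flat row layout are illustrative assumptions; the max over two FIFO-buffered counts plus two fresh counts follows the text.

```python
def pool_max_2x2(fifo_row, current_row):
    """2x2 max pooling over spike counts: for each window, compare the
    two counts buffered from the previous row (read from the FIFO) with
    the two counts just produced for the current row, keeping the max."""
    pooled = []
    for i in range(0, len(current_row), 2):
        window = fifo_row[i:i + 2] + current_row[i:i + 2]  # 4 counts
        pooled.append(max(window))        # maximum of the 2x2 mask
    return pooled
```

A hardware implementation would emit the 2-bit index of the maximum rather than its value; the sketch returns the value itself for clarity.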
The spiking neural network computing array realizes several different operation modes through external configuration signals. When the mode-select signal is set to 1'b0 (mode 0), only the membrane potential and the pulse emitter output need to be computed: the array feeds the emitter output directly to its output port for computation by the next layer of the network (see FIG. 7). When the mode-select signal is set to 1'b1 (mode 1), the emitter output is sent to the pooling buffer; after the pooling buffer and pooling comparator complete the pooling-layer computation, i.e., finding the maximum within each mask, the pooled result is sent to the array's output port (see FIG. 8). In mode 1 the membrane potential is also output and stored.
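The two configuration modes can be composed into one end-to-end sketch. Everything here is an illustrative assumption about dataflow, not the patent's circuit: the input is taken as precomputed membrane-potential increments per pixel per time step, and pooling is modeled as spike counts followed by 2×2 max over a pair of rows.

```python
def forward(increments_row_pair, mode, threshold=32):
    """Mode 0: output each pixel's raw spike train (convolution only).
    Mode 1: count each pixel's spikes over the time window, then apply
    2x2 max pooling across the two adjacent rows (convolution+pooling)."""
    def spikes_for(increments):
        v, train = 0, []
        for inc in increments:            # accumulate per time step
            v += inc
            fire = v > threshold
            train.append(int(fire))
            if fire:
                v -= threshold            # reset by subtraction
        return train

    trains = [[spikes_for(px) for px in row] for row in increments_row_pair]
    if mode == 0:
        return trains                     # spike trains straight to output
    counts = [[sum(t) for t in row] for row in trains]
    top, bot = counts                     # previous (buffered) and current row
    return [max(top[i], top[i + 1], bot[i], bot[i + 1])
            for i in range(0, len(top), 2)]
```

With a single time step of increment 40 on the diagonal pixels, mode 0 yields the raw spike map and mode 1 collapses the 2×2 window to its maximum count.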
The spiking neural network computing array, by adopting an 8×8 computing unit structure, realizes parallel computation of the spiking neural network with good computing flexibility and hardware resource utilization. The accumulation method fully exploits the inherent sparsity of the spiking neural network, eliminating the large amount of redundancy found in traditional computing arrays and improving the operation efficiency of the spiking neural network.
Although the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the details of those embodiments; various equivalent modifications can be made within the technical spirit of the invention, and all such modifications fall within the scope of the present invention.

Claims (10)

1. An impulse neural network computing array is characterized by comprising an impulse neural network computing cluster formed by a plurality of impulse neural network computing units;
the pulse neural network computing unit comprises a membrane potential accumulator and is used for performing accumulation operation on an input pulse sequence;
the pulse emitter is electrically connected with the membrane potential accumulator and judges whether to emit pulses to the next stage according to the membrane potential input by the accumulator;
the pooling buffer area is electrically connected with the pulse emitter and is used for counting and caching pulses of the pulse emitter;
and the pooling comparator is electrically connected with the pulse emitter and is used for carrying out comparison operation on the input of the buffer area.
2. The array of claim 1, wherein the membrane potential accumulators comprise at least one membrane potential loading unit and at least one fixed-point accumulator, and wherein the membrane potential loading unit and the fixed-point accumulator are electrically connected to each other.
3. The array of claim 1, wherein the pulse transmitter comprises at least one fixed-point comparator and at least one fixed-point subtractor, the fixed-point comparator and the fixed-point subtractor being electrically connected to each other.
4. The array of claim 1, wherein the pooled buffer comprises at least one pooled count load unit, at least one counter, and at least one first-in-first-out data buffer, the pooled count load unit, the counter, and the first-in-first-out data buffer being electrically connected to each other;
the pooling comparator comprises at least one fixed point number comparator and at least one data manager, and the fixed point number comparator and the data manager are electrically connected with each other.
5. The array of claim 2, wherein the membrane potential loading unit is activated simultaneously with the fixed-point accumulator, and the loading latency is hidden by the accumulation process.
6. The array of claim 4, wherein the output of the counter is electrically connected to the FIFO data buffer and the pool comparator.
7. The array of claim 4, wherein the data manager comprises a FIFO data buffer reading unit, a packet parsing unit, and a data distribution unit; the FIFO data buffer reading unit is electrically connected with the data packet analyzing unit, and the data packet analyzing unit is electrically connected with the data distributing unit.
8. The array of any one of claims 1 to 7, wherein each of the spiking neural network computing units comprises an input port and an output port, the input port and the output port being electrically connected to each other through the membrane potential accumulator, the pulse emitter, the pooling buffer and the pooling comparator.
9. The array of claim 8, wherein the plurality of spiking neural network computing units form a spiking neural network computing cluster, and the spiking neural network computing units in the cluster share the same set of control logic.
10. The array of claim 9, wherein inputs of the array of spiking neural network-specific computations are electrically connected to inputs of the cluster of spiking neural network computations, and outputs of the cluster of spiking neural network computations are electrically connected to outputs of the array of spiking neural network-specific computations.
CN202110400723.0A 2021-04-14 2021-04-14 Pulse neural network computing array Pending CN113269317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400723.0A CN113269317A (en) 2021-04-14 2021-04-14 Pulse neural network computing array


Publications (1)

Publication Number Publication Date
CN113269317A true CN113269317A (en) 2021-08-17

Family

ID=77229076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110400723.0A Pending CN113269317A (en) 2021-04-14 2021-04-14 Pulse neural network computing array

Country Status (1)

Country Link
CN (1) CN113269317A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103149A1 (en) * 2021-12-06 2023-06-15 成都时识科技有限公司 Pulse event decision-making apparatus and method, chip, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075339A1 (en) * 2016-09-09 2018-03-15 SK Hynix Inc. Neural network hardware accelerator architectures and operating method thereof
CN109816102A (en) * 2017-11-22 2019-05-28 英特尔公司 Reconfigurable nerve synapse core for spike neural network
CN110046695A (en) * 2019-04-09 2019-07-23 中国科学技术大学 A kind of configurable high degree of parallelism spiking neuron array
CN110348564A (en) * 2019-06-11 2019-10-18 中国人民解放军国防科技大学 SCNN reasoning acceleration device based on systolic array, processor and computer equipment
CN111325321A (en) * 2020-02-13 2020-06-23 中国科学院自动化研究所 Brain-like computing system based on multi-neural network fusion and execution method of instruction set
CN112232486A (en) * 2020-10-19 2021-01-15 南京宁麒智能计算芯片研究院有限公司 Optimization method of YOLO pulse neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Maurizio Capra, Beatrice Bussolino, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique: "Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead", IEEE Access, vol. 8, 24 November 2020 *
Zhao Liang (赵亮): "Memristor-based spiking neural network design and its application in image classification", China Master's Theses Full-text Database, 15 March 2020 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination