CN215182040U - Feature detection device and AI edge intelligent equipment

Feature detection device and AI edge intelligent equipment

Info

Publication number
CN215182040U
Authority
CN
China
Prior art keywords: image information, video image, dimensional video, chip, camera
Prior art date
Legal status
Active
Application number
CN202120185920.0U
Other languages
Chinese (zh)
Inventor
李宇
王思嘉
刘宋平
Current Assignee
Shenzhen Aibo Communication Co., Ltd.
Original Assignee
Shenzhen Aibo Communication Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Aibo Communication Co., Ltd.
Priority to CN202120185920.0U
Application granted
Publication of CN215182040U
Legal status: Active

Landscapes

  • Studio Devices (AREA)

Abstract

The utility model discloses a feature detection device and an AI edge intelligent device. The feature detection device includes a depth camera, a plane camera and an electric control assembly. The depth camera collects an original picture and outputs three-dimensional video image information; the plane camera collects an original picture and outputs two-dimensional video image information. The electric control assembly includes an electric control board and an AI chip arranged on the electric control board, the AI chip being electrically connected to the depth camera and the plane camera. The AI chip processes the three-dimensional and two-dimensional video image information, converts it into feature information of a target object, and outputs the feature information to an external terminal. The utility model can detect more features of an object.

Description

Feature detection device and AI edge intelligent equipment
Technical Field
The utility model relates to the technical field of feature recognition, and in particular to a feature detection device and an AI edge intelligent device.
Background
Edge artificial intelligence is one of the most attractive new areas in the field of artificial intelligence. By combining artificial intelligence techniques with edge computing, it lets intelligent devices react quickly to inputs without having to connect to a cloud platform, so that artificial intelligence algorithms run directly on devices capable of edge computing.
An AI edge device is an edge device capable of running artificial intelligence algorithms. Most AI edge devices on the market acquire two-dimensional pictures through a plane camera and then process them into two-dimensional data to obtain the features of an object to be detected. However, the information contained in two-dimensional data is limited, so such devices can acquire only a small number of object features.
SUMMARY OF THE UTILITY MODEL
The utility model aims to provide a feature detection device and an AI edge intelligent device capable of detecting more features of an object.
To achieve the above object, the utility model provides a feature detection device, comprising:
a depth camera for collecting an original picture and outputting three-dimensional video image information;
a plane camera for collecting an original picture and outputting two-dimensional video image information; and
an electric control assembly comprising an electric control board and an AI chip arranged on the electric control board, the AI chip being electrically connected to the depth camera and the plane camera, wherein the AI chip processes the three-dimensional video image information and the two-dimensional video image information, converts them into feature information of a target object, and outputs the feature information to an external terminal.
Optionally, a deep learning accelerator, an image processor and a signal processor are integrated inside the AI chip, the signal processor being connected to the image processor and to the deep learning accelerator. The deep learning accelerator is used for processing the three-dimensional video image information;
the image processor is used for processing the processed three-dimensional video image information together with the two-dimensional video image information to obtain feature information of a target object;
and the signal processor outputs the feature information to an external terminal.
Optionally, a visual accelerator is further integrated inside the AI chip. The visual accelerator is connected to the image processor and the deep learning accelerator as well as to the depth camera and the plane camera, and is configured to decode the three-dimensional video image information output by the depth camera and restore it to three-dimensional data image information, and to decode the two-dimensional video image information output by the plane camera and restore it to two-dimensional data image information.
Optionally, the feature detecting device further includes a power management circuit disposed on the electric control board, where the power management circuit includes:
a power input terminal, a battery and a power switching circuit, wherein a first input terminal of the power switching circuit is connected to the power input terminal, a second input terminal of the power switching circuit is connected to the battery, and an output terminal of the power switching circuit is connected to the AI chip, the depth camera and the plane camera.
Optionally, the power switching circuit includes a first switch circuit and a second switch circuit, an input end of the first switch circuit is connected to the power input end, an output end of the first switch circuit is an output end of the power switching circuit, an input end of the second switch circuit is connected to the battery, and an output end of the second switch circuit is connected to an output end of the first switch circuit.
Optionally, the first switch circuit includes a first switch element, a second switch element and a first resistor, the second switch circuit includes a third switch element and a second resistor, a voltage input terminal of the first switch element is connected to the power input terminal, and a ground terminal is connected to ground; the voltage input end of the second switch element is connected with the power supply input end, the controlled end of the second switch element is connected with the output end of the first switch element, and the output end of the second switch element is the output end of the power supply switching circuit; the controlled end of the third switching element is connected with the output end of the second switching element, the voltage input end is connected with the battery, and the grounding end is connected with the second resistor in series and then is grounded; the first end of the first resistor is connected with the controlled end of the third switching element, and the second end of the first resistor is electrically connected with the output end of the first switching element.
Optionally, the power management circuit further includes a voltage stabilizing circuit, an input end of the voltage stabilizing circuit is connected to the first end of the first resistor, and an output end of the voltage stabilizing circuit is connected to the AI chip, the depth camera, and the plane camera.
Optionally, the feature detection apparatus further includes:
and the wireless communication module is arranged on the electric control board and electrically connected with the AI chip, and the wireless communication module is used for realizing the communication connection between the AI chip and an external terminal.
The utility model further provides an AI edge intelligent device, which includes the above feature detection device. The feature detection device includes a depth camera, a plane camera and an electric control assembly: the depth camera collects an original picture and outputs three-dimensional video image information; the plane camera collects an original picture and outputs two-dimensional video image information; the electric control assembly includes an electric control board and an AI chip arranged on the electric control board, the AI chip being electrically connected to the depth camera and the plane camera; and the AI chip processes the three-dimensional and two-dimensional video image information, converts it into feature information of a target object, and outputs the feature information to an external terminal.
Optionally, the AI edge intelligent device further includes:
a shell, which is hollow and in which an accommodating cavity is formed, the feature detection device being accommodated in the accommodating cavity;
the heat dissipation device comprises a heat dissipation plate in contact with the shell and a heat dissipation fan connected with the heat dissipation plate, and the heat dissipation plate is located outside the shell.
In the technical scheme of the utility model, the depth camera and the plane camera shoot in cooperation: the three-dimensional coordinates of the target object are obtained through the depth camera and then superimposed onto the picture obtained by the plane camera to determine the feature information of the target object, thereby achieving the purpose of detecting more features of the object.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a logic block diagram of the feature detection apparatus of the present invention;
fig. 2 is a block diagram of the internal structure of the AI chip;
FIG. 3 is a circuit diagram of a power management circuit;
FIG. 4 is a side view of an AI edge smart device;
fig. 5 is a rear view of the AI edge smart device.
The reference numerals are explained as follows:

| Reference numeral | Name | Reference numeral | Name |
| --- | --- | --- | --- |
| 10 | Depth camera | 37 | USB interface unit |
| 20 | Plane camera | 38 | RS232 interface unit |
| 30 | Electric control assembly | 39 | MIPI CSI interface unit |
| 31 | AI chip | 40 | Casing |
| 311 | Deep learning accelerator | 41 | Outer casing |
| 312 | Image processor | 42 | Protective shell |
| 313 | Signal processor | 50 | Heat dissipation device |
| 314 | Visual accelerator | 51 | Heat dissipation plate |
| 32 | Power management circuit | 52 | Heat dissipation fan |
| 321 | Power input terminal | 60 | Mounting bracket |
| 322 | Battery | 70 | Antenna unit |
| 323 | Power switching circuit | U1 | Voltage stabilization chip |
| 3231 | First switch circuit | Q1 | First switching element |
| 3232 | Second switch circuit | Q2 | Second switching element |
| 324 | Voltage stabilizing circuit | Q3 | Third switching element |
| 33 | Storage unit | R1 | First resistor |
| 34 | Memory unit | R2 | Second resistor |
| 35 | Wireless communication module | C1 | First capacitor |
| 36 | Ethernet unit | C2 | Second capacitor |
The objects, features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person skilled in the art without creative effort fall within the protection scope of the present invention.
It should be noted that all directional indications (such as up, down, left, right, front and rear) in the embodiments of the present invention are only used to explain the relative positional relationships, motion conditions and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In the present application, unless expressly stated or limited otherwise, the terms "connected" and "fixed" are to be construed broadly. For example, "fixed" may be a fixed connection, a detachable connection, or an integral formation; a connection may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, or an internal communication between two elements, unless expressly defined otherwise. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In addition, the descriptions of "first", "second" and the like in the present application are for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. Furthermore, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered not to exist and not to fall within the protection scope of the present invention.
The utility model provides a feature detection device for detecting more features of a target object.
Referring to fig. 1, in an embodiment of the present invention, the feature detecting apparatus includes:
the depth camera 10 collects an original picture and outputs three-dimensional video image information;
the plane camera 20 collects an original picture and outputs two-dimensional video image information;
the electric control assembly 30 comprises an electric control board and an AI chip 31 arranged on the electric control board, the AI chip 31 being electrically connected to the depth camera 10 and the plane camera 20; the AI chip 31 processes the three-dimensional video image information and the two-dimensional video image information, converts them into feature information of the target object, and outputs the feature information to an external terminal.
The plane camera 20 collects pictures and outputs an original video stream, i.e., the two-dimensional video image information. Most present-day plane cameras 20 can be connected to a network; once connected, the user can directly watch the current state in real time on a background display device. Depending on the required definition, a plane camera 20 with a resolution such as 4K or 8K can be selected; this embodiment optionally adopts a 4K plane camera 20. The specific choice can be made according to the application scenario and requirements and is not limited here.
The depth camera 10 can obtain depth information of a target object. Using the parallax principle, it obtains two images of the target object from different positions and derives the three-dimensional geometric information of the object by calculating the positional deviation between corresponding points in the images; it then outputs a depth video stream containing a three-dimensional image of the target object, namely the three-dimensional video image information. The three-dimensional feature point coordinates obtained by the depth camera 10 yield more features of an object than the two-dimensional features obtained by the plane camera 20. By using the depth camera 10 and the plane camera 20 to capture people, objects and the like in the same scene at the same time, more information about the scene can be acquired and the collected information is enriched.
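As an illustration of the parallax principle described above (not part of the patent), the following Python sketch shows how a depth value can be recovered from the horizontal disparity between corresponding points in a stereo pair, assuming a known focal length and baseline; the function and variable names are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Recover depth Z (metres) from stereo disparity using Z = f * b / d.

    disparity_px    -- per-pixel horizontal offset between the two views
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between the two viewpoints in metres
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(d, np.inf)          # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_length_px * baseline_m / d[valid]
    return depth

# Example: a 3-pixel disparity with f = 600 px and b = 0.05 m gives Z = 10 m.
print(depth_from_disparity([3.0], 600.0, 0.05))   # [10.]
```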
The AI chip 31 can process the information output by the depth camera 10 and the plane camera 20 and can handle a large number of computing tasks, so it can quickly run AI algorithms such as a depth-image-based 3D human behavior recognition and detection algorithm and a person recognition algorithm. In this way, the AI chip 31 receives the image information acquired by the depth camera 10 and the plane camera 20, performs image processing on it based on the 3D human behavior recognition and detection algorithm and the person recognition algorithm, finds the dynamic target object in a dynamic scene, and converts the facial expressions and body movements of the target object into structured data.
Specifically, after the AI chip 31 receives the three-dimensional video image information sent by the depth camera 10 and the two-dimensional video image information sent by the plane camera 20, it performs hard decoding on both, converting the three-dimensional video image information into three-dimensional data image information and the two-dimensional video image information into two-dimensional data image information. When a target object appears in the three-dimensional data image information, the AI chip calculates the three-dimensional feature point coordinates of the target object and then superimposes these feature point coordinates onto the two-dimensional video image information to obtain the features of the target object. The features of the target object include, but are not limited to, behavior and actions, expression, age, and the type of clothing worn. It can be understood that the features of the target object are refined according to the needs of the actual application. For example, the AI chip 31 of this embodiment can define specific human behaviors as required for statistical early warning; with the dual lenses it recognizes multiple human behaviors while also recognizing and giving early warnings on people counting in large scenes, and it supports custom behavior alarms such as micro-expressions, micro-movements, and whether a face is covered.
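The superimposition step described above can be pictured as projecting the recovered three-dimensional feature points into the plane camera's image and packaging the result as structured data. The sketch below is a minimal illustration under a simple pinhole-camera assumption; the intrinsic parameters and field names are hypothetical, not taken from the patent.

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Project Nx3 camera-frame points into pixel coordinates (pinhole model)."""
    pts = np.asarray(points_3d, dtype=np.float64)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

def to_structured_feature(label, points_3d, intrinsics):
    """Bundle a detected target into a structured record for the external terminal."""
    pixels = project_points(points_3d, **intrinsics)
    return {
        "label": label,                       # e.g. a recognised action or expression
        "keypoints_3d": np.asarray(points_3d).tolist(),
        "keypoints_2d": pixels.tolist(),      # positions overlaid on the 2D frame
    }

intrinsics = {"fx": 600.0, "fy": 600.0, "cx": 320.0, "cy": 240.0}
record = to_structured_feature("raised_hand", [[0.1, -0.2, 2.0]], intrinsics)
print(record["keypoints_2d"])   # [[350.0, 180.0]]
```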
In some embodiments, the feature detection apparatus may further include a storage unit 33 and a memory unit 34. The storage unit 33 may be a FLASH memory chip, or another storage medium capable of long-term storage such as a hard disk or an SD card; an SD card is used in this embodiment. The storage unit 33 is connected to the AI chip 31 and stores the system, the algorithm code, and some of the data results output after the AI chip 31 finishes its operations. The memory unit 34 is a memory whose data is lost on power-down; a RAM is used in this embodiment. The memory unit 34 is connected to the AI chip 31 and the storage unit 33 and temporarily stores the algorithm code that the AI chip 31 fetches from the storage unit 33 as well as intermediate data generated by the AI chip 31 during operation.
In the utility model, the depth camera 10 collects an original picture and outputs three-dimensional video image information, the plane camera 20 collects an original picture and outputs two-dimensional video image information, and the AI chip 31 processes the three-dimensional and two-dimensional video image information, converts it into feature information of the target object and outputs the feature information to an external terminal. Existing devices use only a plane camera 20 to obtain two-dimensional pictures and then process them into two-dimensional data to acquire the features of a target object; such two-dimensional data carries limited information, so only a few object features can be obtained. In contrast, the utility model uses the depth camera 10 and the plane camera 20 to capture people, objects and the like in the same scene at the same time, so decisions can be made across multiple dimensions and accuracy is improved. The image information collected by both cameras can be analyzed and computed efficiently and quickly by the AI chip 31, so more object features can be obtained and the device can be widely applied in scenarios that require accurate object features. Based on the combined application of the depth-image-based 3D human behavior recognition and detection algorithm and the person recognition algorithm, the utility model can find, in a dynamic scene, the dynamic targets to be guarded against (mainly people), convert their facial expressions and body movements into structured data, and raise alarms according to the configured alarm rules (covering difficult, complex, multi-dimensional applications such as fighting, robbery, loitering, person recognition, mask wearing and other specified behaviors).
Referring to fig. 1 and 2, in an embodiment, a deep learning accelerator 311, an image processor 312 and a signal processor 313 are integrated inside the AI chip 31, the signal processor 313 being connected to the image processor 312 and to the deep learning accelerator 311. The deep learning accelerator 311 is configured to process the three-dimensional video image information; the image processor 312 is configured to process the processed three-dimensional video image information together with the two-dimensional video image information to obtain feature information of the target object; and the signal processor 313 outputs the feature information to an external terminal.
The algorithm code stored in the storage unit 33 includes a pre-trained AI neural network model. The three-dimensional video image information output by the depth camera 10 first enters the deep learning accelerator 311; at this point the signal processor 313 requests the storage unit 33 to send the AI neural network model to the memory unit 34, and once the model is in the memory unit 34, the deep learning accelerator 311 runs it to process the three-dimensional video image information. The AI neural network model contains fixed data parameters; the deep learning accelerator 311 compares the three-dimensional video image information against these parameters, filtering it like a filter, and passes the three-dimensional video image information that matches the parameters to the image processor 312. The image processor 312 performs image processing on the filtered three-dimensional video image information to obtain the three-dimensional feature point coordinates of the target object, then superimposes these coordinates onto the two-dimensional video image information to obtain the feature information of the target object, which is then transmitted to an external terminal.
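As a conceptual sketch only (the real work is done by the dedicated blocks inside the AI chip 31), the flow just described — load the trained model from the storage unit into memory, let the deep learning accelerator filter incoming 3D frames against the model's fixed parameters, and hand accepted frames to the image processor — could be mimicked in software roughly as follows. All class, method and threshold names are hypothetical.

```python
class FeaturePipeline:
    """Software analogue of the deep-learning-accelerator -> image-processor flow."""

    def __init__(self, model_loader, score_threshold=0.5):
        # Mirrors the signal processor copying the AI model from storage into memory.
        self.model = model_loader()
        self.score_threshold = score_threshold

    def filter_frame(self, frame_3d):
        """Accelerator stage: keep only frames that match the model's parameters."""
        score = self.model.score(frame_3d)
        return frame_3d if score >= self.score_threshold else None

    def extract_features(self, frame_3d, frame_2d):
        """Image-processor stage: 3D keypoints associated with the 2D frame."""
        keypoints_3d = self.model.keypoints(frame_3d)
        return {"keypoints_3d": keypoints_3d, "frame_2d_ref": id(frame_2d)}

    def run(self, frame_3d, frame_2d):
        kept = self.filter_frame(frame_3d)
        if kept is None:
            return None                               # nothing resembling a target
        return self.extract_features(kept, frame_2d)  # forwarded to the external terminal
```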
The application object of the feature detection device depends on the AI neural network model, and the corresponding AI neural network model can be flashed according to the application object. For example, when the target object is cattle, an AI neural network model related to cattle is flashed; when the target object is a person, an AI neural network model related to people is flashed. This is only an example; application objects include, but are not limited to, people and cattle.
The feature detection device has a wide range of applications. For example, it can be used in a classroom to judge students' satisfaction with a course from their facial expressions in class. It can be used in crowded places such as stations, subways or street intersections to judge from people's actions whether undesirable behavior such as fighting or robbery is occurring; alarm rules can be set so that the feature detection device raises an alarm when such behavior is detected, and crowd-density early warning can be carried out through people counting. It can also be applied on farms raising cattle, sheep, chickens, ducks and the like: for example, the volume of a cow over a certain period can be obtained and analyzed through the AI neural network model and its growth judged from the change in volume over that period (see the sketch below), or the health of a cow can be roughly judged from behaviors such as whether it approaches the feeding area. These are only examples, and the applications include, but are not limited to, the above. All of these applications can be realized by the feature detection device itself without sending data to the cloud for processing, which provides faster responses for the user, improves processing efficiency and reduces the load on the cloud.
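As a purely illustrative sketch of the livestock example above (not part of the patent), growth monitoring could amount to comparing successive volume estimates produced by the AI neural network model against an expected growth rate; the threshold below is a made-up value.

```python
def growth_status(volume_series, days, min_daily_growth=0.002):
    """Judge growth from the relative change in estimated body volume.

    volume_series    -- volume estimates (e.g. cubic metres) over the period
    days             -- length of the observation period in days
    min_daily_growth -- illustrative minimum expected relative growth per day
    """
    if len(volume_series) < 2 or days <= 0:
        return "insufficient data"
    relative_change = (volume_series[-1] - volume_series[0]) / volume_series[0]
    daily_growth = relative_change / days
    return "normal growth" if daily_growth >= min_daily_growth else "check animal"

print(growth_status([0.80, 0.82, 0.85], days=30))  # 6.25% over 30 days -> normal growth
```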
In an embodiment, a visual accelerator 314 is further integrated inside the AI chip 31. The visual accelerator 314 is connected to the image processor 312 and the deep learning accelerator 311 as well as to the depth camera 10 and the plane camera 20, and is configured to decode the three-dimensional video image information output by the depth camera 10 and restore it to three-dimensional data image information, and to decode the two-dimensional video image information output by the plane camera 20 and restore it to two-dimensional data image information.
It can be understood that the three-dimensional video image information output by the depth camera 10 and the two-dimensional video image information output by the plane camera 20 generally have a relatively high resolution, which is unfavorable for the processing work of the deep learning accelerator 311. The visual accelerator 314 therefore performs hard decoding on the three-dimensional and two-dimensional video image information, converting them into three-dimensional data image information and two-dimensional data image information respectively, and then delivers the three-dimensional data image information to the deep learning accelerator and the two-dimensional data image information to the image processor. This facilitates the subsequent work of the AI chip 31, and performing the decoding in the visual accelerator 314 avoids occupying the resources of the signal processor 313.
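For readers unfamiliar with the decoding step, the effect of the visual accelerator 314 is comparable to what a software decoder does when it turns a compressed video stream back into raw frames. The OpenCV snippet below is only an everyday software analogue, not the hardware decode path of the AI chip; the stream URL is a placeholder.

```python
import cv2  # OpenCV: software decoding analogue of the hardware visual accelerator

def decode_stream(url, max_frames=100):
    """Pull a compressed video stream and yield decoded (raw) frames."""
    cap = cv2.VideoCapture(url)          # opens and decodes the stream
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()       # one decoded BGR frame per call
            if not ok:
                break
            yield frame                  # raw pixel data, ready for further processing
    finally:
        cap.release()

# for frame in decode_stream("rtsp://camera.local/stream"):  # placeholder URL
#     ...
```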
Referring to fig. 1 and 3, in an embodiment, the feature detection apparatus further includes a power management circuit 32. The power management circuit 32 includes a power input terminal 321, a battery 322 and a power switching circuit 323; a first input terminal of the power switching circuit 323 is connected to the power input terminal 321, a second input terminal is connected to the battery 322, and an output terminal of the power switching circuit 323 is connected to the AI chip 31, the depth camera 10 and the plane camera 20.
The controlled terminal of the power switching circuit 323 is triggered based on the voltage received at the power input terminal 321, so as to select either the voltage received at the power input terminal 321 or the voltage output by the battery 322.
Specifically, when a voltage is input at the power input terminal 321, the power switching circuit 323 connects the power input terminal 321 to the AI chip 31, the depth camera 10 and the plane camera 20; the power connected at the power input terminal 321 is then output through the power switching circuit 323 to the AI chip 31, the depth camera 10 and the plane camera 20. During this process the battery 322 is electrically disconnected from the AI chip 31, the depth camera 10 and the plane camera 20; that is, only the power source connected to the power input terminal 321 supplies the subsequent circuits.
When no voltage is input at the power input terminal 321, the power switching circuit 323 connects the battery 322 to the AI chip 31, the depth camera 10 and the plane camera 20; the battery 322 then supplies them through the power switching circuit 323, powering the circuit modules in the feature detection apparatus.
When voltage input at the power input terminal 321 is restored, the power switching circuit 323 disconnects the supply path between the battery 322 and the AI chip 31, the depth camera 10 and the plane camera 20, and reconnects the power input terminal 321 to them; that is, supply from the power input terminal 321 resumes.
It can be understood that, to avoid drawing on the battery 322 for long periods, the voltage connected at the power input terminal 321 may be given priority over the battery 322; that is, the power switching circuit 323 may take the voltage connected at the power input terminal as the default supply while the feature detection apparatus is operating and switch to the battery 322 only once no voltage output is detected at the power input terminal 321. This prevents the feature detection apparatus from powering off and shutting down when there is no voltage at the power input terminal 321, and thus avoids the data loss that a shutdown could cause. The dual-channel power supply adopted in this embodiment avoids the shortened service life that would result from relying on the battery 322 for a long time, as well as the inability of the feature detection apparatus to work normally when the power input terminal 321 has no voltage.
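The priority rule described above (external supply first, battery only as a fallback) can be summarised by the small decision sketch below. It models the behaviour only, not the actual MOSFET circuit of Fig. 3; the threshold value and function name are illustrative assumptions.

```python
def select_power_source(input_voltage, battery_voltage, min_input_voltage=9.0):
    """Pick the supply that feeds the AI chip and the two cameras.

    The external input is always preferred; the battery is used only while
    no usable voltage is present at the power input terminal.
    """
    if input_voltage >= min_input_voltage:
        return "external"          # battery path held off, no discharge
    if battery_voltage > 0:
        return "battery"           # keeps the device running, avoids data loss
    return "off"

assert select_power_source(12.0, 12.0) == "external"
assert select_power_source(0.0, 12.0) == "battery"
```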
In one embodiment, the power switching circuit 323 includes a first switch circuit 3231 and a second switch circuit 3232, an input terminal of the first switch circuit 3231 is connected to the power input terminal 321, an output terminal of the first switch circuit 3231 is an output terminal of the power switching circuit 323, an input terminal of the second switch circuit 3232 is connected to the battery 322, and an output terminal of the second switch circuit 3232 is connected to an output terminal of the first switch circuit 3231.
When a voltage is input at the power input terminal 321, the first switch circuit 3231 operates and the second switch circuit 3232 does not; the power connected at the power input terminal 321 is output through the first switch circuit 3231 to the AI chip 31, the depth camera 10 and the plane camera 20, while the battery 322 is electrically disconnected from them.
When the power input terminal 321 has no voltage input, the first switch circuit 3231 does not operate and the second switch circuit 3232 does; the battery 322 then supplies the AI chip 31, the depth camera 10 and the plane camera 20 through the second switch circuit 3232.
When the power input terminal 321 resumes voltage input, the first switch circuit 3231 operates and the second switch circuit 3232 stops operating; the battery 322 is disconnected from the AI chip 31, the depth camera 10 and the plane camera 20, and the first switch circuit 3231 connects the power input terminal 321 to them.
In one embodiment, the first switch circuit 3231 includes a first switching element Q1, a second switching element Q2 and a first resistor R1, and the second switch circuit 3232 includes a third switching element Q3 and a second resistor R2. The voltage input terminal of the first switching element Q1 is connected to the power input terminal 321 and its ground terminal is connected to ground. The voltage input terminal of the second switching element Q2 is connected to the power input terminal 321, its controlled terminal is connected to the output terminal of the first switching element Q1, and its output terminal is the output terminal of the power switching circuit 323. The controlled terminal of the third switching element Q3 is connected to the output terminal of the second switching element Q2, its voltage input terminal is connected to the battery 322, and its ground terminal is connected in series with the second resistor R2 and then grounded. The first terminal of the first resistor R1 is connected to the controlled terminal of the third switching element Q3, and its second terminal is electrically connected to the output terminal of the first switching element Q1.
The first switching element Q1, the second switching element Q2 and the third switching element Q3 may be MOS transistors or other electronic devices capable of performing a switching function. In this embodiment, the first switching element Q1 is an NMOS transistor, and the second switching element Q2 and the third switching element Q3 are both PMOS transistors. The voltage input terminal, the ground terminal, and the output terminal of the first switching element Q1 may be referred to as the gate, the source, and the drain, respectively; the controlled terminal, the voltage input terminal and the output terminal of the second switching element Q2 may be referred to as a gate, a drain and a source, respectively; the controlled terminal, the voltage input terminal, and the ground terminal of the third switching element Q3 may be referred to as a source, a drain, and a gate, respectively. In this embodiment, the power input 321 is connected to a 12V voltage, the output voltage of the battery 322 is 12V when the battery is fully charged, the first switching element Q1 is turned on when receiving a high level, and is turned off when receiving a low level, and the second switching element Q2 is turned off when receiving a high level, and is turned on when receiving a low level. The third switching element Q3 turns off when receiving a high level and turns on when receiving a low level.
When a voltage is input at the power input terminal 321, the gate of the first switching element Q1 (an NMOS transistor) receives a high level, so Q1 turns on and outputs a voltage. After division by the first resistor R1, the gate level of the second switching element Q2 (a PMOS transistor) is higher than its source level, so Q2 is in an off state; the source of the third switching element Q3 is clamped at a high level, so Q3 is off and the battery 322 cannot discharge.
When the power input terminal 321 has no voltage input, the first switching element Q1 receives a low level and turns off; the gate of the third switching element Q3 is pulled low by the second resistor R2, so Q3 conducts, the battery 322 forms a discharge loop and outputs a voltage; the gate voltage of the second switching element Q2 is pulled high by the first resistor R1, so Q2 remains off. By controlling the operating states of the first, second and third switching elements Q1, Q2 and Q3, switching between the external power source and the battery 322 is achieved while the losses of the whole circuit remain almost negligible.
In an embodiment, the power management circuit 32 further includes a voltage stabilizing circuit 324; the input terminal of the voltage stabilizing circuit 324 is connected to the first terminal of the first resistor R1, and its output terminal is connected to the AI chip 31, the depth camera 10 and the plane camera 20. The voltage stabilizing circuit 324 includes a voltage stabilization chip U1, a first capacitor C1 and a second capacitor C2. The input terminal of the voltage stabilization chip U1 is connected to the first terminal of the first resistor R1; its output terminal is connected in series with the second capacitor C2 and then grounded; the first terminal of the first capacitor C1 is connected to the input terminal of the voltage stabilization chip U1 and its second terminal is grounded; and the ground terminal of the voltage stabilization chip U1 is connected to one terminal of the first capacitor C1. The depth camera 10, the plane camera 20 and the AI chip 31 are all connected to the output terminal of the voltage stabilization chip U1.
The voltage stabilizing circuit 324 converts the input voltage and stably outputs the operating voltages of the depth camera 10, the plane camera 20 and the AI chip 31. Even if the external input voltage changes suddenly for reasons such as an abrupt power failure or power-up, the voltage stabilizing circuit 324 keeps the output voltage essentially unchanged, guaranteeing normal operation of the depth camera 10, the plane camera 20 and the AI chip 31. The first capacitor C1 and the second capacitor C2 both act as filters.
Referring to fig. 1, in an embodiment, the feature detection apparatus further includes a wireless communication module 35 connected to the AI chip 31, and the wireless communication module 35 is configured to enable the AI chip 31 to be in communication connection with an external terminal.
The wireless communication module 35 may be a 2G/3G/4G/5G communication module, for example a 4G communication module, and as technology advances it may also be a 6G communication module. In this embodiment, the wireless communication module 35 is a 5G communication module used for real-time communication to transmit large amounts of real-time data, providing high-speed, low-latency technical support.
The feature detection device further includes an Ethernet unit 36 connected to the AI chip 31. For some remote locations where the 5G communication unit cannot be used normally, real-time communication and data transmission can still be achieved through the Ethernet unit 36, which improves applicability.
The feature detection device further includes a USB interface unit 37, an RS232 interface unit 38 and a MIPI CSI interface unit 39. The USB interface unit 37 is connected to the AI chip 31 and is used to expand peripheral devices; the RS232 interface unit 38 is connected to the AI chip 31 and is used to communicate with external devices that also have an RS232 interface; and the MIPI CSI interface unit 39 is connected to the AI chip 31 and is used to expand a binocular camera.
The utility model further provides an AI edge intelligent device. Referring to fig. 4 and 5, the AI edge intelligent device includes a casing 40, a heat dissipation device 50 and the feature detection device; for the specific structure of the feature detection device, reference is made to the above embodiments. Since the AI edge intelligent device adopts all the technical solutions of all the above embodiments, it has at least all the beneficial effects brought by those technical solutions, which are not repeated here.
The casing 40 is hollow inside and includes an outer casing 41 and a protective shell 42 clamped to the outer casing 41; the outer casing 41 and the protective shell 42 enclose a closed accommodating cavity. The feature detection device is accommodated in the accommodating cavity and fixedly connected to the outer casing 41 by screws, or fixing posts, clamping grooves and the like are provided in the outer casing 41 to hold the feature detection device in the accommodating cavity and prevent relative movement between the feature detection device and the casing 40. The outer casing 41 may also be provided with a threading hole through which wires such as data lines and power lines can be electrically connected to external devices. The depth camera 10 and the plane camera 20 are close to the protective shell 42, and the protective shell 42 is transparent, so that it provides dust-proofing and water-proofing without affecting the shooting of the depth camera 10 and the plane camera 20.
A heat dissipation plate 51 is fixed on the surface of the outer casing 41 facing away from the protective shell 42; the heat dissipation plate 51 has a pin-fin shape. A heat dissipation fan 52 is fixed on the surface of the heat dissipation plate 51 facing away from the outer casing 41; the heat dissipation fan 52 is a turbo centrifugal fan. A mounting bracket 60 is fixed on the surface of the heat dissipation fan 52 facing away from the heat dissipation plate 51.
The surface of the outer casing 41 close to the heat dissipation plate 51 is made of metal. The heat dissipation plate 51 absorbs the heat generated during operation by the electric control board, the depth camera 10, the plane camera 20 and other parts of the feature detection device, and the heat dissipation fan 52 cools the heat dissipation plate 51, accelerating heat dissipation and increasing the heat dissipation rate of the feature detection device, thereby preventing damage to the feature detection device caused by untimely heat dissipation.
In some embodiments, to enable the device to communicate better with the outside, the device is further provided with an antenna unit 70. The heat dissipation plate 51 and the outer casing 41 are both provided with connecting through holes, and the antenna unit 70 is electrically connected to the wireless communication module 35 through these two connecting through holes.
The above are only optional embodiments of the present invention and are not intended to limit its scope. Any equivalent structural change made using the contents of the specification and drawings under the inventive concept of the present invention, or any direct or indirect application in other related technical fields, falls within the patent protection scope of the present invention.

Claims (10)

1. A feature detection apparatus, comprising:
the depth camera is used for collecting an original picture and outputting three-dimensional video image information;
the plane camera is used for collecting an original picture and outputting two-dimensional video image information;
the electric control assembly comprises an electric control board and an AI chip arranged on the electric control board, and the AI chip is electrically connected with the depth camera and the plane camera; and the AI chip processes the three-dimensional video image information and the two-dimensional video image information, converts them into feature information of a target object, and outputs the feature information to an external terminal.
2. The feature detection device according to claim 1, wherein a deep learning accelerator, an image processor, and a signal processor are integrated inside the AI chip, and the signal processor is connected to the image processor and the deep learning accelerator, respectively; the deep learning accelerator is used for processing the three-dimensional video image information;
the image processor is used for processing the processed three-dimensional video image information and the two-dimensional video image information to obtain characteristic information of a target object;
and the signal processor outputs the characteristic information to an external terminal.
3. The feature detection apparatus according to claim 2, wherein a visual accelerator is further integrated inside the AI chip, the visual accelerator is connected to the image processor and the deep learning accelerator, the visual accelerator is connected to the depth camera and the plane camera, the visual accelerator is configured to decode and restore three-dimensional video image information output by the depth camera to three-dimensional data image information, and,
and decoding the two-dimensional video image information output by the plane camera to restore the two-dimensional video image information into two-dimensional data image information.
4. The feature detection device of claim 1, further comprising a power management circuit disposed on the electronic control board, the power management circuit comprising:
a power input terminal, a battery and a power switching circuit, wherein a first input terminal of the power switching circuit is connected with the power input terminal, a second input terminal of the power switching circuit is connected with the battery, and an output terminal of the power switching circuit is connected with the AI chip, the depth camera and the plane camera.
5. The feature detection device as recited in claim 4, wherein the power switching circuit comprises a first switch circuit and a second switch circuit, an input of the first switch circuit being connected to the power input terminal, an output of the first switch circuit being an output of the power switching circuit, an input of the second switch circuit being connected to the battery, and an output of the second switch circuit being connected to the output of the first switch circuit.
6. The feature detection apparatus according to claim 5, wherein the first switch circuit includes a first switch element, a second switch element, and a first resistor, and the second switch circuit includes a third switch element and a second resistor; a voltage input terminal of the first switch element is connected to the power supply input terminal, and a ground terminal of the first switch element is connected to ground; the voltage input end of the second switch element is connected with the power supply input end, the controlled end of the second switch element is connected with the output end of the first switch element, and the output end of the second switch element is the output end of the power supply switching circuit; the controlled end of the third switching element is connected with the output end of the second switching element, the voltage input end is connected with the battery, and the grounding end is connected with the second resistor in series and then is grounded; the first end of the first resistor is connected with the controlled end of the third switching element, and the second end of the first resistor is electrically connected with the output end of the first switching element.
7. The feature detection device of claim 6, wherein the power management circuit further comprises a voltage regulator circuit, an input terminal of the voltage regulator circuit is connected to the first terminal of the first resistor, and an output terminal of the voltage regulator circuit is connected to the AI chip, the depth camera, and the plane camera.
8. The feature detection apparatus according to claim 1, characterized in that the feature detection apparatus further comprises:
and the wireless communication module is arranged on the electric control board and electrically connected with the AI chip, and the wireless communication module is used for realizing the communication connection between the AI chip and an external terminal.
9. AI edge intelligent device, characterized in that it comprises a feature detection apparatus according to any one of claims 1 to 8.
10. The AI edge intelligent device of claim 9, wherein the AI edge intelligent device further comprises:
a shell, which is hollow and in which an accommodating cavity is formed, the feature detection device being accommodated in the accommodating cavity;
the heat dissipation device comprises a heat dissipation plate in contact with the shell and a heat dissipation fan connected with the heat dissipation plate, and the heat dissipation plate is located outside the shell.
CN202120185920.0U 2021-01-22 2021-01-22 Feature detection device and AI edge intelligent equipment Active CN215182040U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202120185920.0U CN215182040U (en) 2021-01-22 2021-01-22 Feature detection device and AI edge intelligent equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202120185920.0U CN215182040U (en) 2021-01-22 2021-01-22 Feature detection device and AI edge intelligent equipment

Publications (1)

Publication Number Publication Date
CN215182040U (en) 2021-12-14

Family

ID=79407122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202120185920.0U Active CN215182040U (en) 2021-01-22 2021-01-22 Feature detection device and AI edge intelligent equipment

Country Status (1)

Country Link
CN (1) CN215182040U (en)


Legal Events

Date Code Title Description
GR01 Patent grant