CN111783522B - Object detection system, method, device and equipment - Google Patents

Object detection system, method, device and equipment

Info

Publication number
CN111783522B
CN111783522B · CN202010426868.3A
Authority
CN
China
Prior art keywords
images
object detection
light
target area
different wavelengths
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010426868.3A
Other languages
Chinese (zh)
Other versions
CN111783522A (en)
Inventor
赵栋
罗克凡
高玉涛
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010426868.3A
Publication of CN111783522A
Application granted
Publication of CN111783522B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an object detection system, method, device and equipment. In the detection system, when a target area is alternately irradiated with a plurality of monochromatic lights of different wavelengths, objects in the target area with different light absorption capacities exhibit different optical characteristics. The image acquisition equipment photographs the target area and can acquire a plurality of images of the target area under irradiation by monochromatic light of different wavelengths. Across these images, objects with different optical characteristics show marked differences; performing object detection based on the images can therefore effectively improve the sensitivity, accuracy, and efficiency of object detection.

Description

Object detection system, method, device and equipment
Technical Field
The present application relates to the field of intelligent detection technologies, and in particular, to an object detection system, method, device, and apparatus.
Background
The detection of foreign objects is a critical link in the operation and maintenance of airport runways. Foreign Object Debris (FOD) refers to foreign matter, debris, or objects that may damage an aircraft. When an airplane taxis, FOD is easily sucked into an engine and can cause engine failure.
In the prior art, scanning by a radar vehicle is typically used to detect foreign objects on a runway, but this approach is inefficient. A new solution is therefore needed.
Disclosure of Invention
Aspects of the present disclosure provide an object detection system, method, device and apparatus for improving object detection efficiency.
An embodiment of the present application provides an object detection system, including an illumination device, an image acquisition device, and a data processing device. The lighting device is configured to alternately irradiate a target area with a plurality of monochromatic lights of different wavelengths. The image acquisition device is configured to photograph the target area and send the shooting result to the data processing device. The data processing device is configured to: acquire a plurality of images from the shooting result, the images showing the target area under alternate irradiation by the monochromatic lights of different wavelengths; and perform object detection based on the images to identify a first object contained in the target area.
An embodiment of the present application further provides an object detection method, including: acquiring a plurality of images showing a target area under alternate irradiation by a plurality of monochromatic lights of different wavelengths; and performing object detection based on the images to identify a first object contained in the target area.
An embodiment of the present application further provides an object detection device, including: a data acquisition module configured to acquire a plurality of images showing a target area under alternate irradiation by a plurality of monochromatic lights of different wavelengths; and an object detection module configured to perform object detection based on the images to identify a first object contained in the target area.
An embodiment of the present application further provides a data processing apparatus, including a memory, a processor, and a communication component. The memory is configured to store one or more computer instructions; the processor is configured to execute the one or more computer instructions to perform the object detection method provided by the embodiments of the present application.
The embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, can implement the object detection method provided in the embodiments of the present application.
In the detection system provided by the embodiment of the application, when the target area is alternately irradiated with monochromatic light of a plurality of different wavelengths, objects in the target area with different light absorption capacities exhibit different optical characteristics. The image acquisition equipment photographs the target area and can acquire a plurality of images of the target area under irradiation by monochromatic light of different wavelengths. Across these images, objects with different optical characteristics show marked differences; performing object detection based on the images can therefore effectively improve the sensitivity, accuracy, and efficiency of object detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of an object detection system according to an exemplary embodiment of the present disclosure;
fig. 2 is a schematic view of an integrated arrangement of an illumination device and an image capturing device according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of an object detection system according to another exemplary embodiment of the present application;
FIG. 4 is a schematic flow chart of an object detection method according to an exemplary embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an object detection method according to an embodiment of an application scenario of the present application;
FIG. 6 is a schematic structural diagram of an object detection apparatus according to an exemplary embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a data processing apparatus according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A common method for detecting foreign objects intruding on an apron is based on radar scanning, in which a worker typically drives a radar vehicle to scan for foreign matter. This type of patrol work incurs high labor costs, and because the working range of radar scanning is limited, the response to foreign objects is delayed and detection cannot be performed efficiently. Moreover, the radar vehicle itself becomes a foreign object intruding on the apron. In view of these technical problems, some embodiments of the present application provide a solution, described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of an object detection system according to an exemplary embodiment of the present application, and as shown in fig. 1, the object detection system 100 includes: an illumination device 10, an image acquisition device 20 and a data processing device 30.
Based on the object detection system 100, efficient object detection can be achieved. Object detection means that objects in a picture or video stream are perceived and analyzed by a computer, the position of each detected object is marked with a bounding box, and its category is given. The object detection system 100 is described below by way of example.
In the object detection system 100, the illumination apparatus 10 is mainly used to emit a plurality of monochromatic lights of different wavelengths to alternately irradiate the target area. Alternate irradiation means that the lighting device 10 emits light of only one wavelength at a time; after light of one wavelength has been emitted for a certain period, the device switches to light of another wavelength, producing the effect of alternate irradiation by monochromatic lights of different wavelengths.
The target area may be any area that needs to be detected, and takes different forms in different scenarios. For example, in the traffic field, the target area may be at least one of an airport runway, an apron, a railroad track, an expressway, or an ordinary road. In the field of sports, the target area may be at least one of a playground, an athletics track, or a ball court. In other fields, the target area may take other forms, which this embodiment does not limit.
The number of the lighting devices 10 may be one or more, and may be specifically determined according to the area of the target region, which is not limited in this embodiment. When the area of the target area is large, a plurality of lighting apparatuses 10 may be disposed such that the irradiation range of the plurality of lighting apparatuses 10 covers the target area. In some scenes, the existing lighting facilities in the target area can be modified to emit monochromatic light with various colors alternating, so that the existing lighting facilities can be reused, and the hardware cost of the object detection system 100 is reduced.
Objects in the target area are varied, and different objects are made of different materials, so different objects absorb monochromatic light of different wavelengths to different degrees. When an object is irradiated by monochromatic lights of different wavelengths, its optical characteristics under each wavelength can be observed. Performing object detection based on these differences in optical characteristics under different monochromatic lights can further improve the sensitivity of the detection algorithm and the accuracy of the detection result.
In the object detection system 100, the image pickup device 20 is mainly used for: when the lighting apparatus 10 illuminates a target area, the target area is photographed, and the photographing result is transmitted to the data processing apparatus 30.
The image capturing device 20 may be implemented as an electronic device that performs imaging based on a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, such as a high-speed camera or a rotary camera. In some scenarios, a surveillance camera already installed in the target area may be reused as the image capturing device 20 to reduce the hardware cost of the object detection system 100.
In the object detection system 100, after the data processing device 30 receives the shooting results sent by the image acquisition device 20, it can obtain a plurality of images from them; the images show the target area under alternate irradiation by monochromatic lights of different wavelengths. That is, each image corresponds to monochromatic light of one wavelength, and different images are captured under monochromatic light of different wavelengths. The images can therefore reflect differences in the optical characteristics of the same object in the target area under different monochromatic lights, which helps to identify the same object and distinguish different objects.
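As an illustrative sketch of this per-wavelength correspondence (the frame representation and field names are assumptions, not part of the patent), the data processing step might group incoming frames by the wavelength that was active when each was captured:

```python
from collections import defaultdict

def group_frames_by_wavelength(frames):
    """Group captured frames by the wavelength (nm) that was active
    when each frame was shot, so that per-wavelength images of the
    same target area can be compared.

    `frames` is an iterable of (wavelength_nm, image) pairs; the
    image payload is opaque here (e.g. a pixel array in practice).
    """
    groups = defaultdict(list)
    for wavelength_nm, image in frames:
        groups[wavelength_nm].append(image)
    return dict(groups)

# One capture cycle: red (605 nm), green (530 nm), red again.
frames = [(605, "img_a"), (530, "img_b"), (605, "img_c")]
grouped = group_frames_by_wavelength(frames)
```

Grouping in this way makes the per-wavelength differences of the same scene directly comparable downstream.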
Next, the data processing device 30 may perform object detection based on the plurality of images, and further, identify an object included in the target area. For convenience of description and distinction, in the following embodiments, the object detected by the data processing device 30 is described as the first object. The "first" is not intended to limit information such as the number, order, rank, and position of the objects. The first object may comprise one object, or comprise a plurality of the same objects, or comprise a plurality of different objects, depending on the specific detection result.
In different fields, the objects to be detected in a target area differ. In the traffic field, objects that may affect traffic safety may be detected from the target area. For example, for airport operations, foreign objects that pose a safety threat to taxiing aircraft may be detected on runways and aprons, including but not limited to: aircraft and engine parts such as nuts, screws, washers, and fuses; mechanical tools such as wrenches and pliers; items left behind by passengers such as personal certificates, pens, and pencils; natural obstacles such as wild animals, leaves, stones, sand, and ice; and airport construction materials such as pavement materials, wood blocks, plastic or polyethylene materials, and paper products.
For another example, in the field of sports, objects that may interfere with sports or threaten the safety of athletes may be detected from the target area. For example, objects that affect the quality and safety of track and field sports, such as stones, sand, metal nails, and mirror-like objects, can be detected from the track and field venue; details are not repeated.
In some scenarios, the data processing device 30 may be implemented by a computer or server device; if implemented as a server, it may take the form of a conventional server, a cloud host, a virtual center, or other server equipment, whose composition mainly includes a processor, hard disk, memory, system bus, and the like, similar to a general computer architecture.
In other scenarios, the data processing device 30 may be implemented using application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), microcontrollers, microprocessors, or other electronic elements, which this embodiment does not limit.
In the object detection system 100, the image capturing device 20 and the data processing device 30 may be connected in communication in a wired or wireless manner. The wireless communication mode may include short-range modes such as Bluetooth, ZigBee, infrared, and WiFi (Wireless Fidelity); long-range modes such as LoRa; and mobile-network communication. When connected through a mobile network, the network format may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like, which this embodiment does not limit.
In this embodiment, when the target area is alternately irradiated with monochromatic light of a plurality of different wavelengths, objects in the target area with different light absorption capacities exhibit different optical characteristics. The image acquisition equipment photographs the target area and can acquire a plurality of images of the target area under alternate irradiation by monochromatic light of different wavelengths. Across these images, objects with different optical characteristics show marked differences; performing object detection based on the images can therefore effectively improve the sensitivity, accuracy, and efficiency of object detection.
In some alternative embodiments, the illumination device 10 may be implemented as one or more multi-wavelength light sources, each formed from a composite light source and a set of optical filters. The lighting device 10 can filter the composite light source by switching the optical filter, thereby alternately emitting monochromatic lights of different wavelengths; details are not repeated.
In other alternative embodiments, the illumination device 10 may be implemented as one or more groups of light sources, each group consisting of a plurality of monochromatic light sources of different wavelengths. The monochromatic light sources in each group can emit light alternately under the control of a control unit, achieving the effect of irradiating the target area with monochromatic lights of different wavelengths. Optionally, when the lighting device 10 includes multiple groups of light sources, they may be mounted together on one device or distributed across different devices, which this embodiment does not limit.
Alternatively, the plurality of monochromatic lights of different wavelengths emitted by the lighting device 10 may include: at least one of infrared light, red light, yellow light, green light, and ultraviolet light.
Accordingly, when the lighting device 10 is implemented as one or more multi-wavelength light sources, optical filters corresponding to infrared, red, yellow, green, and ultraviolet light may be selected according to the monochromatic light required, and combined with the composite light source to obtain a multi-wavelength light source.
When the lighting device 10 is implemented as one or more groups of light sources, monochromatic light sources with the corresponding emission wavelengths may be selected for each group according to the monochromatic light required. For example, to provide infrared, red, yellow, green, and ultraviolet light, the following may be selected: light source 1 with an emission wavelength greater than 760 nm, light source 2 with an emission wavelength of 605 nm, light source 3 with an emission wavelength of 580 nm to 595 nm, light source 4 with an emission wavelength of 500 nm to 560 nm, and light source 5 with an emission wavelength below 400 nm. Of course, these wavelengths merely illustrate the light sources by example; in other alternative embodiments, light sources of other wavelengths may be used, and the embodiments are not limited.
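The example bands above can be captured in a small lookup table. This sketch is illustrative only; the band boundaries simply mirror the wavelengths quoted in the preceding paragraph, with open-ended bands represented by None:

```python
# Nominal emission bands (nm) from the example above; None marks an
# open-ended bound. Values mirror the text and are illustrative.
MONOCHROME_BANDS = {
    "infrared":    (760, None),   # > 760 nm (approximate, inclusive here)
    "red":         (605, 605),    # 605 nm
    "yellow":      (580, 595),    # 580-595 nm
    "green":       (500, 560),    # 500-560 nm
    "ultraviolet": (None, 400),   # < 400 nm (approximate, inclusive here)
}

def band_contains(band_name, wavelength_nm):
    """Check whether a wavelength falls inside a named band."""
    lo, hi = MONOCHROME_BANDS[band_name]
    if lo is not None and wavelength_nm < lo:
        return False
    if hi is not None and wavelength_nm > hi:
        return False
    return True
```

Such a table makes it easy to validate that a chosen light source actually emits within the intended band.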
Alternatively, the illumination device 10 and the image capture device 20 may be integrated, as shown in fig. 2. In this case, it should be ensured that the illumination range of the illumination device 10 covers the field of view of the image pickup device 20 as much as possible, so that the captured images exhibit a good monochromatic lighting effect.
In some alternative embodiments, the lighting device 10 may emit monochromatic lights of different wavelengths in sequence according to a set alternation interval, i.e., the time for which each monochromatic light is emitted before switching to the next; for example, 10 seconds, 20 seconds, or 1 minute. During alternate irradiation, the monochromatic lights shine in a set order; after the previous monochromatic light has shone for the interval, illumination switches to the next.
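As a minimal sketch of such an alternation schedule (the function, wavelengths, and timing values are illustrative, not from the patent), a round-robin illumination controller might be driven as follows:

```python
import itertools

def alternation_schedule(wavelengths_nm, interval_s, total_s):
    """Yield (start_time_s, wavelength_nm) pairs for a round-robin
    illumination schedule: each wavelength shines for `interval_s`
    seconds before switching to the next, repeating until `total_s`."""
    cycle = itertools.cycle(wavelengths_nm)
    t = 0.0
    while t < total_s:
        yield t, next(cycle)
        t += interval_s

# Three wavelengths, 10 s each, over one minute: two full cycles.
schedule = list(alternation_schedule([760, 605, 530], interval_s=10, total_s=60))
```

In a real system each yielded entry would drive the control unit that switches the active monochromatic light source.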
In some alternative embodiments, when the lighting device 10 illuminates the target area, the image capturing device 20 may record video and send the captured video to the data processing device 30. In this way a continuous video stream is acquired, avoiding the loss of partial images.
In other alternative embodiments, the image capturing device 20 may take a series of still images while the lighting device 10 illuminates the target area. In this scenario, the image capturing device 20 captures the target area at a shooting interval adapted to the alternation interval, resulting in discrete images. "Adapted to" may mean that the shooting interval equals or is smaller than the alternation interval, so that the target area is guaranteed to be photographed during each monochromatic light's illumination period. Compared with video, capturing still images produces less data, which reduces the storage pressure on the image acquisition device 20 and, because discrete images are transmitted to the data processing device 30, effectively improves the data transmission efficiency between the image acquisition device 20 and the data processing device 30.
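The constraint that the shooting interval must not exceed the alternation interval can be sketched as a simple check (the function name and units are illustrative):

```python
def frames_per_wavelength(alternation_interval_s, capture_interval_s):
    """Return a lower bound on the number of still frames captured
    during each wavelength's illumination window. A capture interval
    no longer than the alternation interval guarantees that every
    wavelength is photographed at least once per window."""
    if capture_interval_s > alternation_interval_s:
        raise ValueError("capture interval must not exceed alternation interval")
    return int(alternation_interval_s // capture_interval_s)

# 10 s per wavelength, one shot every 2 s: at least 5 frames per window.
n = frames_per_wavelength(10, 2)
```

Equal intervals give exactly one frame per window, the minimum the scheme requires.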
In some alternative embodiments, the object detection performed by the data processing device 30 on the multiple target images may be implemented with a machine learning algorithm. Optionally, the machine learning algorithm may be an XGBoost algorithm, a logistic regression algorithm, a support vector machine (SVM), a deep learning algorithm, a naive Bayes algorithm, or the like, but this embodiment is not limited thereto.
The following examples illustrate alternative implementations of object detection using a deep learning algorithm. Deep learning performs representation learning on data using artificial neural networks (NN) as the framework; it may use multiple processing layers, comprising complex structures or multiple nonlinear transformations, to abstract data at a high level and perform object detection based on the resulting features.
Before object detection is performed with the neural network model, the model may be trained so that it learns the characteristics of objects and outputs an object's position and type based on those characteristics.
In the training stage of the neural network model, a plurality of sample images corresponding to an object may optionally be obtained, with the true values of the object labeled in them. The true values may include a position true value and a type true value. For convenience of description and distinction, the following embodiments describe the object contained in the sample images as the second object. The sample images are captured under alternate irradiation by monochromatic lights of different wavelengths, the same as or similar to the monochromatic lights provided by the illumination device 10.
Then, the sample images are input into the neural network model, and the second object is used as a supervision signal to train the model's object detection capability. During training, inside the neural network model, the processing layers may process the sample images to extract the image features of each, and fuse the features extracted from the different images. The probabilities that the object belongs to different object types are then calculated from the fused features, and the types of the objects contained in the sample images are predicted from these probabilities. The difference between the predicted type and the supervision signal (i.e., the type true value of the second object) can then be computed and used to construct a loss function. The model parameters of each layer are iteratively optimized to minimize the loss function; when the loss function converges to a sufficiently small value, the model's detections are considered close to the true values and its detection performance good.
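By way of illustration only, the fuse-then-classify step above can be sketched numerically. This toy example in pure Python (no real neural network; the weights, feature shapes, and class indices are all invented for the sketch) concatenates per-wavelength feature vectors, applies a linear layer, and computes the cross-entropy loss against the labeled type of the second object:

```python
import math

def fuse(feature_vectors):
    """Fuse per-wavelength image features by concatenation."""
    fused = []
    for v in feature_vectors:
        fused.extend(v)
    return fused

def softmax(logits):
    """Convert logits into a probability distribution over types."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy_loss(weights, feature_vectors, true_class):
    """Linear classifier over the fused features; the loss measures
    the gap between the prediction and the supervision signal
    (the type true value of the second object)."""
    fused = fuse(feature_vectors)
    logits = [sum(w * x for w, x in zip(row, fused)) for row in weights]
    probs = softmax(logits)
    return -math.log(probs[true_class])

# Two wavelength images, 2 features each -> fused length 4; 3 types.
features = [[0.9, 0.1], [0.2, 0.8]]
weights = [[1.0, 0.0, 0.0, 1.0],   # type 0
           [0.0, 1.0, 1.0, 0.0],   # type 1
           [0.5, 0.5, 0.5, 0.5]]   # type 2
loss = cross_entropy_loss(weights, features, true_class=0)
```

Iteratively adjusting `weights` to shrink `loss` is the optimization loop the paragraph describes, here reduced to its smallest moving parts.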
It should be noted that, in order to expand the object detection range of the neural network model, samples containing a plurality of different types of second objects may be selected for model training, so that the model learns to detect different objects; details are not repeated.
When performing object detection with the trained neural network model, the data processing device 30 may input the acquired images into the model, extract the image features of each image based on one part of the optimized model parameters, and fuse these features to obtain fused features. Then, based on the other part of the optimized model parameters, the objects contained in the images are identified from the fused features.
Optionally, the neural network model may be implemented as one or more of: convolutional neural networks (CNN), deep neural networks (DNN), graph convolutional networks (GCN), recurrent neural networks (RNN), and long short-term memory networks (LSTM), or may be obtained by modifying one or more of these networks, which this embodiment does not limit.
Optionally, the detection result output by the neural network model may include the position of the first object in the target region and/or the type of the first object. The position of the first object in the target area can be displayed with a detection box (bounding box). The type of the first object may include its material, category, and so on. For example, types of the first object may include: a wood block, an animal, a metal nail, and the like, which this embodiment does not limit.
Based on the output of the neural network model, the data processing device 30 may further calculate the number of first objects contained in the target region, the size of each first object, and so on, which are not described again.
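Deriving the number and sizes of detected first objects from the model's bounding boxes might look like the following sketch; the `(x1, y1, x2, y2)` pixel box format is an assumption, since the patent does not fix a representation:

```python
def summarize_detections(boxes):
    """Compute the object count and per-box pixel sizes from a list
    of detection boxes given as (x1, y1, x2, y2) corner coordinates."""
    sizes = [(x2 - x1, y2 - y1) for (x1, y1, x2, y2) in boxes]
    return {"count": len(boxes), "sizes": sizes}

# Two detections: a 20x15 px object and a 10x20 px object.
summary = summarize_detections([(10, 10, 30, 25), (100, 40, 110, 60)])
```

Pixel sizes can then be converted to physical dimensions if the camera geometry is known.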
In some optional embodiments, after the data processing device 30 detects the first object included in the target area, information of the first object may be output. Wherein the information of the first object includes: at least one of a position of the first object in the target area, a type of the first object, a number of the first objects, and a size of the first object.
Optionally, in some scenarios, as shown in fig. 3, the object detection system 100 further includes a terminal device 40. After acquiring the information of the first object, the data processing device 30 may send it to the terminal device 40 for presentation to the user. The terminal device may be a user-side mobile phone, computer, tablet computer, intelligent wearable device, or the like, but this embodiment is not limited thereto.
In some optional embodiments, after acquiring the information of the first object, the data processing device 30 may further determine whether the first object is an abnormal object according to a set foreign object identification rule. Optionally, the foreign object identification rules may be customized by the user for use by the data processing device 30; they differ between scenarios and depend on specific requirements. If the first object is determined to be abnormal, the data processing device 30 may trigger an alarm event and output an alarm message according to the alarm policy corresponding to that event. The alarm policy may include: sending a message to the terminal device 40, making a call, or sending a control instruction to an alarm device to sound it, among others, which this embodiment does not limit.
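A user-configured foreign-object identification rule and the corresponding alarm trigger could be sketched as follows; the rule format, type names, and size thresholds are all invented for illustration:

```python
def is_abnormal(detection, rules):
    """Apply a user-configured foreign-object identification rule.
    `rules` maps an object type to the minimum size (px) at which it
    is treated as abnormal; unknown types are abnormal by default.
    Both the rule shape and the default policy are illustrative."""
    threshold = rules.get(detection["type"])
    if threshold is None:
        return True
    w, h = detection["size"]
    return max(w, h) >= threshold

def alarm_messages(detections, rules):
    """Produce an alarm message for each abnormal detection."""
    return ["ALARM: %s at %s" % (d["type"], d["position"])
            for d in detections if is_abnormal(d, rules)]

rules = {"leaf": 200, "metal_nail": 1}   # leaves tolerated unless huge
detections = [
    {"type": "metal_nail", "size": (4, 12), "position": (120, 340)},
    {"type": "leaf", "size": (30, 18), "position": (50, 80)},
]
alarms = alarm_messages(detections, rules)
```

In a deployed system each message would be routed per the alarm policy: pushed to the terminal device, dialed out, or sent as a control instruction to a physical alarm.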
It should also be noted that the object detection system 100 provided in the embodiments of the present application may use lighting devices 10 with a larger illumination range, or increase the number of lighting devices 10 to enlarge the illuminated area. Likewise, an image acquisition device 20 with a large field of view may be selected; in some scenarios, a camera rotating through 360 degrees may be used for panoramic scanning. Object detection can thus be performed over a large-area region, overcoming the limited operating range of radar patrol scanning. Meanwhile, no manual participation is needed, which reduces the labor cost of object detection. In addition, in some scenarios, the lighting device 10 may be obtained by modifying the original lighting facilities in the target area, and the image acquisition device 20 may reuse the original monitoring facilities in the target area, which on one hand reduces the hardware cost of the object detection system and on the other hand introduces no new intrusion into the target area.
In addition to the object detection system described in the above embodiments, an embodiment of the present application further provides an object detection method, as shown in fig. 4, the object detection method including:
Step 401, acquiring a plurality of images, wherein the plurality of images contain a target area alternately irradiated by a plurality of monochromatic lights of different wavelengths.
Step 402, performing object detection based on the plurality of images to identify a first object contained in the target area.
In some exemplary embodiments, the method further comprises: outputting information of the first object; the information of the first object includes: at least one of a location of the first object in the target area, a type of the first object, a number of the first objects, and a size of the first object.
In some exemplary embodiments, the method further comprises: judging whether the first object is an abnormal object or not according to a set foreign matter identification rule; if yes, triggering an alarm event, and outputting an alarm message according to an alarm strategy corresponding to the alarm event.
In some exemplary embodiments, a manner of object detection based on the plurality of images includes: inputting the plurality of images into a neural network model; extracting respective image features of the plurality of images in the neural network model; fusing the image characteristics of the multiple images to obtain fusion characteristics; and identifying objects contained in the multiple images according to the fusion characteristics.
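The extract-fuse-identify pipeline above can be sketched as follows. This is a minimal illustration that assumes PyTorch as the framework, one small CNN branch per monochromatic light, and channel-wise concatenation as the fusion step; the embodiment does not prescribe a particular architecture or framework.

```python
import torch
import torch.nn as nn

class MultiLightFusionNet(nn.Module):
    """One feature-extraction branch per monochromatic-light image; the
    per-image features are concatenated (the fusion feature) and fed to
    a classification head that identifies the contained objects."""
    def __init__(self, n_lights=5, n_classes=10, feat_dim=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, feat_dim),
            )
            for _ in range(n_lights)
        )
        self.head = nn.Linear(n_lights * feat_dim, n_classes)

    def forward(self, images):
        # images: list of n_lights tensors, each (B, 3, H, W), showing the
        # same scene under a different monochromatic light.
        feats = [branch(img) for branch, img in zip(self.branches, images)]
        fused = torch.cat(feats, dim=1)   # fusion feature
        return self.head(fused)

model = MultiLightFusionNet()
imgs = [torch.randn(2, 3, 64, 64) for _ in range(5)]  # five light sources
logits = model(imgs)                                  # shape (2, 10)
```

Concatenation is only one possible fusion; averaging or attention-weighted pooling across the wavelength branches would fit the same description.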
In some exemplary embodiments, the method further comprises: acquiring a plurality of sample images containing a second object, wherein the plurality of sample images are obtained by shooting under the irradiation of a plurality of monochromatic light with different wavelengths; inputting the plurality of sample images into the neural network model; and training the object detection capability of the neural network model by using the second object in the multiple sample images as a supervision signal.
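The training procedure above may be sketched as follows, again assuming PyTorch. The model, batch, and labels here are placeholders: the labels of the known second object act as the supervision signal, and the five monochromatic sample images of each scene are stacked along the channel axis for simplicity.

```python
import torch
import torch.nn as nn

# Minimal stand-in for the detection model; it consumes the five
# monochromatic sample images stacked along the channel axis (5 * 3 channels).
model = nn.Sequential(
    nn.Conv2d(5 * 3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical batch: 4 scenes, each photographed under 5 monochromatic
# lights, plus per-scene labels of the second object (the supervision signal).
sample_images = torch.randn(4, 5 * 3, 32, 32)
labels = torch.tensor([1, 0, 1, 1])

for _ in range(3):  # a few illustrative gradient steps
    opt.zero_grad()
    loss = loss_fn(model(sample_images), labels)
    loss.backward()
    opt.step()
```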
In some exemplary embodiments, the plurality of monochromatic lights of different wavelengths includes: at least one of infrared light, red light, yellow light, green light, and ultraviolet light.
In some exemplary embodiments, the target region includes: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
In this embodiment, the plurality of images used for object detection contain the target area alternately irradiated by a plurality of monochromatic lights of different wavelengths. Accordingly, objects with different light absorption capabilities located within the target area exhibit different optical characteristics across the plurality of images. Performing object detection based on these images can effectively improve the sensitivity and accuracy of object detection and increase its efficiency.
Some of the flows described in the above embodiments and in the figures include a number of actions occurring in a particular order, but it should be clearly understood that these actions may be performed out of the order in which they appear herein, or in parallel; sequence numbers such as 401 and 402 merely distinguish the actions and do not, by themselves, dictate any order of execution.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions provided by the embodiments of the present application can be applied to various application scenarios, such as: foreign object detection on airport runways/aprons, foreign object detection on highways, foreign object detection on sports grounds, foreign object detection in production workshops, railway foreign object intrusion detection, dangerous object detection in specific places, and the like. Some of these scenarios are exemplified below.
In one scenario, to detect foreign objects on an airport runway, multi-wavelength lighting devices and cameras may be deployed on both sides of the runway. The lighting devices are controlled to emit a plurality of monochromatic lights of different wavelengths, so that the monochromatic lights alternately illuminate the runway. The shooting angle of the camera is adjusted so that the illuminated runway lies within the camera's field of view. While the runway is alternately illuminated by the monochromatic lights, the camera performs video acquisition to obtain video data, which is then transmitted to the server.
As shown in fig. 5, after receiving the video data, the server decodes it to obtain the images collected under the illumination of the various monochromatic lights. For example, when the runway is alternately illuminated by infrared, red, yellow, green, and ultraviolet light, an infrared picture, a red picture, a yellow picture, a green picture, and an ultraviolet picture can be decoded from the video data. These pictures are then input into a neural network model for foreign object debris (FOD) recognition. In the neural network model, the image features of the multi-light-source pictures are fused, and objects on the runway are identified from the fused features. If an object on the runway is determined to be abnormal according to the recognition result, an alarm message can be sent to the terminal device or alarm device of the airport, so that airport operation and maintenance personnel can clear the abnormal object in time and avoid a safety threat to aircraft taxiing.
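One simple way to recover per-wavelength pictures from the decoded frame sequence is round-robin demultiplexing, assuming the camera's shooting interval is synchronized with the lights' alternation so that the i-th frame was captured under the i-th light of the cycle. The wavelength cycle and the synchronization scheme below are assumptions for illustration, not details from the embodiment.

```python
def demux_by_wavelength(frames,
                        cycle=("infrared", "red", "yellow", "green", "ultraviolet")):
    """Assign decoded frames to wavelength buckets: frame i is assumed to
    have been captured under cycle[i % len(cycle)], i.e. capture is locked
    to the alternation of the monochromatic lights."""
    buckets = {name: [] for name in cycle}
    for i, frame in enumerate(frames):
        buckets[cycle[i % len(cycle)]].append(frame)
    return buckets

# Stand-in for ten decoded frames (real frames would be image arrays).
groups = demux_by_wavelength(list(range(10)))
```

With this grouping, frames 0 and 5 land in the infrared bucket, frames 1 and 6 in the red bucket, and so on, yielding one picture set per monochromatic light for the recognition model.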
In another scenario, the street lamps along a highway can be modified so that they emit monochromatic light of different wavelengths in turn. Meanwhile, surveillance cameras deployed above the highway photograph the road surface and send the captured video data to the server in real time.
After receiving the video data, the server decodes it to obtain images of the highway surface under the illumination of the various monochromatic lights. For example, when the street lamps emit red, yellow, blue, purple, and green monochromatic light in turn, the server can decode a red picture, a yellow picture, a blue picture, a purple picture, and a green picture from the video data. The server then inputs these pictures into a neural network model for foreign object recognition. In the neural network model, the image features of the multi-light-source pictures are fused, and objects on the road surface are identified from the fused features. If an object on the highway is determined to be abnormal (i.e., an object that obstructs or endangers high-speed driving, such as nails, glass fragments, lost cargo, or vehicle parts) according to the recognition result, an alarm message can be sent to the terminal device or alarm device of the road management department, so that road maintenance personnel can clear the abnormal object in time and avoid traffic hazards.
In yet another scenario, a multi-wavelength lighting device may be deployed on a production-facility assembly line so that it emits multiple monochromatic lights in turn to illuminate the production equipment on the line. Meanwhile, a camera photographs the illuminated production equipment and sends the captured video data to the server in real time.
After receiving the video data, the server decodes it to obtain images of the production equipment under the illumination of the various monochromatic lights, i.e., multi-light-source pictures. The server then inputs these pictures into a neural network model for foreign object recognition. In the neural network model, the image features of the multi-light-source pictures are fused, and objects on the production equipment and on the line are identified from the fused features. If the identified object is determined to pose a risk of damaging the production equipment, an alarm message can be sent to a designated terminal device or to factory staff to avoid production losses.
An embodiment of the present application further provides an object detection apparatus, as shown in fig. 6, the object detection apparatus includes:
a data obtaining module 601, configured to: acquiring a plurality of images, wherein the plurality of images comprise target areas irradiated by monochromatic light with different wavelengths.
An object detection module 602 to: and performing object detection based on the plurality of images to identify a first object contained in the target area.
Further optionally, as shown in fig. 6, the apparatus further includes an output module 603, configured to: output information of the first object; the information of the first object includes: at least one of a location of the first object in the target area, a type of the first object, a number of the first objects, and a size of the first object.
Further optionally, as shown in fig. 6, the apparatus further includes an early warning module 604, configured to: determine whether the first object is an abnormal object according to a set foreign object identification rule; if so, trigger an alarm event and output an alarm message according to an alarm policy corresponding to the alarm event.
Further optionally, when the object detection module 602 performs object detection based on the multiple images, it is specifically configured to: inputting the plurality of images into a neural network model; extracting respective image features of the plurality of images in the neural network model; fusing the image characteristics of the multiple images to obtain fusion characteristics; and identifying objects contained in the multiple images according to the fusion characteristics.
Further optionally, as shown in fig. 6, the apparatus further includes a model training module 604, specifically configured to: acquiring a plurality of sample images containing a second object, wherein the plurality of sample images are obtained by shooting under the irradiation of a plurality of monochromatic light with different wavelengths; inputting the plurality of sample images into the neural network model; and training the object detection capability of the neural network model by using the second object in the plurality of sample images as a supervision signal.
Further optionally, the plurality of monochromatic lights of different wavelengths comprises: at least one of infrared light, red light, yellow light, green light, and ultraviolet light.
Further optionally, the target region comprises: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
In this embodiment, the plurality of images used for object detection contain the target area alternately irradiated by a plurality of monochromatic lights of different wavelengths. Accordingly, objects with different light absorption capabilities located within the target area exhibit different optical characteristics across the plurality of images. Performing object detection based on these images can effectively improve the sensitivity and accuracy of object detection and increase its efficiency.
Fig. 7 is a schematic structural diagram of a data processing apparatus according to an exemplary embodiment of the present application, and as shown in fig. 7, the data processing apparatus includes: a memory 701 and a processor 702.
The memory 701 stores a computer program and may be configured to store various other data to support operations on the data processing device. Examples of such data include instructions for any application or method operating on the data processing device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 701 may be implemented by any type or combination of volatile and non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 702, coupled to the memory 701, for executing the computer program in the memory 701 for: acquiring a plurality of images, wherein the plurality of images comprise target areas irradiated by monochromatic light with different wavelengths; and performing object detection based on the plurality of images to identify a first object contained in the target area.
Further optionally, the processor 702 is further configured to: outputting information of the first object; the information of the first object includes: at least one of a location of the first object in the target area, a type of the first object, a number of the first objects, and a size of the first object.
Further optionally, the processor 702 is further configured to: judging whether the first object is an abnormal object or not according to a set foreign matter identification rule; if yes, triggering an alarm event, and outputting an alarm message according to an alarm strategy corresponding to the alarm event.
Further optionally, the processor 702, when performing the object detection based on the plurality of images, is specifically configured to: inputting the plurality of images into a neural network model; extracting respective image features of the plurality of images in the neural network model; fusing the image characteristics of the multiple images to obtain fusion characteristics; and identifying objects contained in the multiple images according to the fusion characteristics.
Further optionally, the processor 702 is further configured to: acquiring a plurality of sample images containing a second object, wherein the plurality of sample images are obtained by shooting under the irradiation of a plurality of monochromatic light with different wavelengths; inputting the plurality of sample images into the neural network model; and training the object detection capability of the neural network model by using the second object in the multiple sample images as a supervision signal.
Further optionally, the plurality of monochromatic lights of different wavelengths includes: at least one of infrared light, red light, yellow light, green light, and ultraviolet light.
Further optionally, the target region comprises: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
Further, as shown in fig. 7, the data processing apparatus further includes: communication component 703, display component 704, power component 705, audio component 706, and other components. Only some of the components are schematically shown in fig. 7, and it is not meant that the data processing apparatus comprises only the components shown in fig. 7.
The communication component 703 is configured to facilitate communication between the device in which the communication component is located and other devices in a wired or wireless manner. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display assembly 704 includes a screen, which may include a liquid crystal display assembly (LCD) and a Touch Panel (TP), among others. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply 705 provides power to various components of the device in which the power supply is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In this embodiment, the plurality of images used for object detection contain the target area alternately illuminated by monochromatic light of a plurality of different wavelengths. Accordingly, objects with different light absorption capabilities located within the target area exhibit different optical characteristics across the plurality of images. Performing object detection based on these images can effectively improve the sensitivity and accuracy of object detection and increase its efficiency.
Accordingly, the present application further provides a computer readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the data processing device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processor to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processor, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processor to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processor to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. An object detection system, comprising:
the system comprises an illumination device, an image acquisition device and a data processing device;
wherein the lighting device is configured to: adopting a plurality of monochromatic lights with different wavelengths to alternately irradiate a target area;
the image acquisition device is configured to: shooting the target area and sending a shooting result to the data processing equipment;
the data processing device is configured to: acquiring a plurality of images from the shooting result, wherein the plurality of images comprise the target area under the alternate irradiation of the monochromatic light with the different wavelengths; performing object detection based on the plurality of images to identify a first object contained in the target area; performing object detection based on the plurality of images, including: adopting a neural network model to detect the object based on the difference of optical characteristics of the object under the irradiation of different monochromatic light; the recognition result of the first object comprises: the material of the first object;
wherein the target region comprises: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
2. The system according to claim 1, characterized in that the lighting device is specifically configured to: emitting the monochromatic light with the plurality of different wavelengths in turn according to the set alternating interval;
the image acquisition device is specifically configured to: and shooting the target area according to the shooting interval adaptive to the alternate interval to obtain the shooting result.
3. The system of claim 1, wherein the illumination device comprises a multi-wavelength light source or a plurality of monochromatic light sources of different wavelengths.
4. The system of claim 3, wherein the illumination device is integral with the image capture device.
5. The system of claim 1, wherein the data processing device is further configured to: judging whether the first object is an abnormal object or not according to a set foreign matter identification rule; if yes, triggering an alarm event, and outputting an alarm message according to an alarm strategy corresponding to the alarm event.
6. The system of claim 1, wherein the plurality of monochromatic lights of different wavelengths comprises: at least one of infrared light, red light, yellow light, green light, and ultraviolet light.
7. An object detection method, comprising:
acquiring a plurality of images, wherein the plurality of images comprise target areas irradiated by various monochromatic light with different wavelengths alternately;
performing object detection based on the plurality of images to identify a first object contained in the target area;
wherein performing object detection based on the plurality of images includes: adopting a neural network model to detect the object based on the difference of optical characteristics of the object under the irradiation of different monochromatic light; the recognition result of the first object comprises: the material of the first object;
wherein the target region comprises: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
8. The method of claim 7, further comprising:
outputting information of the first object; the information of the first object includes: at least one of a location of the first object in the target area, a type of the first object, a number of the first objects, and a size of the first object.
9. The method of claim 7, further comprising:
judging whether the first object is an abnormal object or not according to a set foreign matter identification rule;
if yes, triggering an alarm event, and outputting an alarm message according to an alarm strategy corresponding to the alarm event.
10. The method according to any one of claims 7-9, wherein performing object detection based on the plurality of images comprises:
inputting the plurality of images into a neural network model;
extracting respective image features of the plurality of images in the neural network model;
fusing the image characteristics of the multiple images to obtain fusion characteristics;
and identifying objects contained in the multiple images according to the fusion characteristics.
11. The method of claim 10, further comprising:
acquiring a plurality of sample images containing a second object, wherein the plurality of sample images are obtained by shooting under the irradiation of a plurality of monochromatic light with different wavelengths;
inputting the plurality of sample images into the neural network model;
and training the object detection capability of the neural network model by using the second object in the multiple sample images as a supervision signal.
12. An object detection device, comprising:
a data acquisition module to: acquiring a plurality of images, wherein the plurality of images comprise target areas irradiated by various monochromatic light with different wavelengths alternately;
an object detection module to: performing object detection based on the plurality of images to identify a first object contained in the target area; wherein, when the object detection module performs object detection based on the plurality of images, the object detection module is specifically configured to: adopting a neural network model to detect the object based on the difference of the optical characteristics of the object under the irradiation of different monochromatic light; the recognition result of the first object comprises: the material of the first object;
wherein the target region comprises: at least one of an airstrip, an apron, a railroad track, a highway, and an unobstructed road.
13. A data processing apparatus, characterized by comprising: a memory, a processor, and a communication component;
the memory is to store one or more computer instructions;
the processor is to execute the one or more computer instructions to: performing the object detection method of any one of claims 7-11.
14. A computer-readable storage medium storing a computer program, wherein the computer program is capable of implementing the object detection method according to any one of claims 7 to 11 when executed by a processor.
CN202010426868.3A 2020-05-19 2020-05-19 Object detection system, method, device and equipment Active CN111783522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010426868.3A CN111783522B (en) 2020-05-19 2020-05-19 Object detection system, method, device and equipment

Publications (2)

Publication Number Publication Date
CN111783522A CN111783522A (en) 2020-10-16
CN111783522B true CN111783522B (en) 2022-06-21

Family

ID=72754288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010426868.3A Active CN111783522B (en) 2020-05-19 2020-05-19 Object detection system, method, device and equipment

Country Status (1)

Country Link
CN (1) CN111783522B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2024512583A (en) * 2021-03-24 2024-03-19 日本電気株式会社 Image processing method and image processing device
CN118348029A (en) * 2024-05-09 2024-07-16 山东中清智能科技股份有限公司 Surface defect detection method and device for light-emitting chip

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202770413U (en) * 2012-08-23 2013-03-06 杭州先临三维科技股份有限公司 3D scanner for obtaining colorful image by monochrome camera
JP2013113803A (en) * 2011-11-30 2013-06-10 Sumitomo Electric Ind Ltd Object detection device, and object detection method
CN110135296A (en) * 2019-04-30 2019-08-16 上海交通大学 Airfield runway FOD detection method based on convolutional neural networks

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221376A (en) * 2008-01-30 2008-07-16 上海微电子装备有限公司 Color dynamic selection method based on multi-period mark
CN103576159B (en) * 2013-11-14 2016-01-20 中国民用航空总局第二研究所 A kind of runway road surface checking device based on laser scanner technique and method
CN104199119B (en) * 2014-08-20 2017-04-19 华南理工大学 Lamp belt device for detecting foreign matters in gaps between shielded gate at metro platform and train
CN107356983A (en) * 2017-07-19 2017-11-17 中国民航大学 Foreign body detection system for airfield runway and detection method
CN207718001U (en) * 2018-01-02 2018-08-10 上海德运光电技术有限公司 A kind of airfield runway detection device for foreign matter

Similar Documents

Publication Publication Date Title
JP7009987B2 (en) Automatic driving system and automatic driving method
US10055649B2 (en) Image enhancements for vehicle imaging systems
Zhang et al. Automated detection of grade-crossing-trespassing near misses based on computer vision analysis of surveillance video data
US8233662B2 (en) Method and system for detecting signal color from a moving video platform
KR101971878B1 (en) Video surveillance system and method using deep-learning based car number recognition technology in multi-lane environment
CN109271921B (en) Intelligent identification method and system for multispectral imaging
CN111783522B (en) Object detection system, method, device and equipment
US10823877B2 (en) Devices, systems, and methods for under vehicle surveillance
JP2024123217A (en) SYSTEM AND METHOD FOR ACQUIRING TRAINING DATA - Patent application
AU2021202430A1 (en) Smart city closed camera photocell and street lamp device
US11721100B2 (en) Automatic air recirculation systems for vehicles
WO2012018109A1 (en) Information management apparatus, data analysis apparatus, signal machine, server, information management system, signal machine control apparatus, and program
US11256926B2 (en) Method and system for analyzing the movement of bodies in a traffic system
US11436839B2 (en) Systems and methods of detecting moving obstacles
US20120242832A1 (en) Vehicle headlight management
Gasparini et al. Anomaly detection, localization and classification for railway inspection
KR102353724B1 (en) Apparatus and method for monitoring city condition
CN116420058A (en) Replacing autonomous vehicle data
Loong et al. Machine vision based smart parking system using Internet of Things
CN110263623A (en) Train climbs monitoring method, device, terminal and storage medium
Santhanam et al. Animal detection for road safety using deep learning
US20100027841A1 (en) Method and system for detecting a signal structure from a moving video platform
US20200142413A1 (en) Configurable illumination on region of interest for autonomous driving
Hasan et al. Simultaneous traffic sign recognition and real-time communication using dual camera in ITS
Sinha et al. IoT and machine learning for traffic monitoring, headlight automation, and self-parking: application of AI in transportation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant