CN114708545A - Image-based object detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114708545A
CN114708545A (application number CN202210272123.5A)
Authority
CN
China
Prior art keywords
target
image
detection
monitoring
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210272123.5A
Other languages
Chinese (zh)
Inventor
邵为中
邵世雷
刘松
Current Assignee
NANJING C3I ELECTRONIC SYSTEM ENGINEERING Inc
Original Assignee
NANJING C3I ELECTRONIC SYSTEM ENGINEERING Inc
Priority date
Filing date
Publication date
Application filed by NANJING C3I ELECTRONIC SYSTEM ENGINEERING Inc
Publication of CN114708545A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image-based object detection method, apparatus, device, and storage medium. The method includes: receiving a detection request signal from a user terminal and parsing it to obtain the original image and positioning data it carries; inputting the original image into a target recognition model; sending the resulting marked image to the user terminal and requesting it to feed back a selection of a target detection frame; generating a target image from the selected target detection frame; selecting, according to the positioning data and the target type of the target image, a plurality of monitoring terminals for object detection and sending them an image request signal; and having an object recognition model output a judgment of whether the object depicted in the target image is present in the monitored images. By recognizing the image uploaded from the user terminal and automatically selecting monitoring terminals within a certain range to detect the identified target, the method improves detection efficiency and accuracy.

Description

Image-based object detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for image-based object detection.
Background
The skynet project is a video monitoring system built from GIS (geographic information system) maps, image acquisition, transmission, control, and display equipment together with control software; it performs real-time monitoring and information recording of fixed areas to meet the needs of urban public security, prevention and control, and city management. In the skynet project, video monitoring equipment is installed at key traffic roads, public security checkpoints, public gathering places, hotels, schools, hospitals, and locations with complex public-security conditions. The images of all video monitoring points in an area are transmitted to a monitoring center (the skynet project management platform) over networks such as a dedicated video network, the internet, and mobile networks, where the images are classified, providing reliable image data for strengthening comprehensive city management and risk prevention.
Through the monitoring platform, relevant departments can monitor main roads, key units, and hot spots of all urban streets and districts around the clock, effectively eliminating hidden public security risks.
However, the existing skynet project mostly records passively and cannot perform targeted detection of specific objects, and existing dynamic tracking often relies on the personal experience of monitoring personnel; as a result, detection efficiency is low and real-time actions cannot be effectively coordinated.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present application provide an image-based object detection method, apparatus, electronic device and computer storage medium to solve the technical problems mentioned in the background section above.
As a first aspect of the present application, some embodiments of the present application provide an image-based object detection method, including: receiving a detection request signal from a user terminal, and analyzing the detection request signal to acquire an original image and positioning data in the detection request signal; inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame; sending the marked image to the user terminal and requesting the user terminal to feed back the selection of the target detection frame; responding to the selection of the target detection frame fed back by the user terminal, and generating a target image according to the selected target detection frame; selecting a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and sending image request signals to the monitoring terminals; and receiving a monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether an object pointed by the target image exists in the monitoring image.
As a second aspect of the present application, some embodiments of the present application provide an image-based object detection apparatus, including: the analysis module is used for receiving the detection request signal from the user terminal and analyzing the detection request signal to acquire an original image and positioning data in the detection request signal; the output module is used for inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame; the request module is used for sending the marked image to the user terminal and requesting the user terminal to feed back the selection of the target detection frame; the generating module is used for responding to the selection of the target detection frame fed back by the user terminal and generating a target image according to the selected target detection frame; the selecting module is used for selecting a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and sending image request signals to the monitoring terminals; and the judging module is used for receiving the monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether an object pointed by the target image exists in the monitoring image.
As a third aspect of the present application, some embodiments of the present application provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
As a fourth aspect of the present application, some embodiments of the present application provide a computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The beneficial effect of this application lies in: the monitoring terminals in a certain range are automatically selected for object detection according to the identified target image through identifying the transmitted image of the user terminal, so that the detection efficiency and accuracy are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, provide a further understanding of the application and make its other features, objects, and advantages more apparent. The drawings and their description illustrate embodiments of the invention without limiting it.
Further, throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
In the drawings:
fig. 1 is a schematic view of an application scenario of an image-based object detection method according to some embodiments of the present application;
FIG. 2 is a flow chart of a method for image-based object detection according to an embodiment of the present application;
FIG. 3 is a flow chart of a portion of the steps of a method for image-based object detection according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating another portion of the steps of an image-based object detection method according to an embodiment of the present application;
FIG. 5 is a block diagram of an image-based object detection apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The meaning of the reference symbols in the figures:
the detection system 100, the user terminal 101, the monitoring terminal 102, and the server 103.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, the detection system includes a user terminal, monitoring terminals, and a server. The user terminal may be a smartphone, whose user can send an object detection request by taking a picture; the monitoring terminals may include monitoring equipment in the skynet system as well as access-authorized monitoring equipment at various other sites.
Referring to fig. 2, an image-based object detection method according to an embodiment of the present application includes the following steps:
S1: receiving a detection request signal from a user terminal, and analyzing the detection request signal to obtain an original image and positioning data in the detection request signal.
S2: the original image is input to a target recognition model so that the target recognition model outputs a marker image having a target detection frame and a target type of the target detection frame.
S3: and sending the marked image to a user terminal and requesting the user terminal to feed back the selection of the target detection frame.
S4: and responding to the selection of the target detection frame fed back by the user terminal, and generating a target image according to the selected target detection frame.
S5: according to the positioning data and the target type of the target image, a plurality of monitoring terminals for object detection are selected and image request signals are sent to the monitoring terminals.
S6: and receiving a monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether an object pointed by the target image exists in the monitoring image.
Specifically, the target recognition model is a convolutional neural network, and the object recognition model is also a convolutional neural network.
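The six steps above can be sketched end to end. The following is a minimal illustration, not the patent's implementation: the two convolutional neural networks are replaced by caller-supplied stand-ins, and all names, data structures, and thresholds (`Detection`, `DetectionRequest`, the 2 km cutoff) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical data structures and stub models illustrating the S1-S6 flow.
# The patent specifies only that both recognizers are convolutional neural
# networks, so they are passed in here as opaque callables.

@dataclass
class Detection:
    box: tuple          # (x, y, w, h) in pixels
    target_type: str    # e.g. "vehicle", "person", "animal"
    confidence: float

@dataclass
class DetectionRequest:
    original_image: bytes
    positioning: tuple  # (latitude, longitude) of the user terminal

def run_pipeline(request, target_model, object_model, terminals, user_choice):
    # S1: parse the request into image and positioning data
    image, position = request.original_image, request.positioning
    # S2: target recognition model produces the candidate detection frames
    detections = target_model(image)
    # S3/S4: the user terminal feeds back a choice; that frame is the target
    target = detections[user_choice]
    # S5: choose monitoring terminals near the user, suited to the type
    # (the 2 km radius here is an illustrative value, not from the patent)
    selected = [t for t in terminals
                if t["distance_km"] <= 2.0 and target.target_type in t["types"]]
    # S6: the object recognition model checks each terminal's feed and
    # returns, per terminal, whether the targeted object is present
    return {t["id"]: object_model(t["latest_frame"], target) for t in selected}
```

With stub models, the pipeline returns a per-terminal judgment, mirroring the judgment result described for step S6.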
As a specific scheme, as shown in fig. 3, step S2 further includes the following specific steps:
S21: generating a target detection frame.
S22: and matching the target type and the corresponding confidence degree for each target detection box.
S23: judging whether the confidence of the target type of each target detection frame is greater than or equal to a preset confidence threshold, and if so, displaying the corresponding target detection frame in the marked image; and if not, not displaying the corresponding target detection frame in the marked image.
S24: and judging whether a target detection frame exists in the marked image, if so, outputting the marked image, and if not, returning to extract another original image in the detection request signal.
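Steps S23 and S24 amount to a confidence filter with a fallback. A minimal sketch, in which the threshold value and the tuple layout are illustrative assumptions:

```python
def filter_detections(detections, threshold=0.5):
    """S23: keep only detection frames whose target-type confidence meets
    the preset threshold.

    `detections` is a list of (box, target_type, confidence) tuples; the
    default threshold of 0.5 is illustrative, not specified by the patent.
    """
    kept = [d for d in detections if d[2] >= threshold]
    # S24: if no frame survives, return None to signal the caller to
    # extract another original image from the detection request signal.
    return kept if kept else None
```

A caller would loop over the images in the request until `filter_detections` returns a non-empty list or the images are exhausted.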
As a specific solution, the detection request signal may include a plurality of images captured by the user terminal, so that another image can be used when the target recognition model fails to recognize one of them.
As a preferred scheme, in order to implement the recognition function, the target recognition model needs to be trained, specifically including the following steps: collecting target detection training data and training the target recognition model with that data. The input of the target recognition model is an original picture; its output is a target image with target detection frames, together with the position coordinates and target type of each frame. Convolutional neural network models that generate detection frames and assign labels to images are themselves well known in the art and are not described in detail here.
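The training-sample structure the paragraph above implies can be written out concretely. This is a hypothetical schema for illustration only; the field names and the image path are assumptions, not part of the patent:

```python
# One training sample for the target recognition model: the input is the
# raw picture, the labels are detection frames with position coordinates
# (here x, y, width, height) and target types, matching the description.
training_sample = {
    "input": "images/000001.jpg",  # path to the original picture (illustrative)
    "labels": [
        {"box": (120, 40, 64, 128), "target_type": "person"},
        {"box": (300, 80, 200, 90), "target_type": "vehicle"},
    ],
}
```

A dataset is then simply a list of such samples fed to whatever detection network is chosen.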
As a specific scheme, as shown in fig. 4, step S5 specifically includes the following steps:
s51: and clustering the plurality of monitoring terminals according to the historical images acquired by the monitoring terminals and the target data identified from the historical images so as to form a plurality of monitoring terminal groups.
S52: and selecting the monitoring terminal group where the closest monitoring terminal is located according to the positioning data and the target type of the target image.
S53: and setting a detection area taking the closest monitoring terminal as a circle center according to the positioning data and the target type of the target image.
S54: and selecting a plurality of monitoring terminals in all the monitoring terminal groups in the detection area, and sending image request signals to the selected plurality of monitoring terminals.
More specifically, in step S51, the monitoring terminals capture images at regular intervals; the target recognition model then produces the detection frames in each image and the target types within them, and the image inside each detection frame is input into the object recognition model for identification. If the same object appears at both monitoring terminal A and monitoring terminal B, it is assigned a unique object number and labeled with its object type, such as vehicle, person, or animal. The object numbers observed at each monitoring terminal are then aggregated, clustering is computed from the similarity of each terminal's object numbers, and the terminals that fall into one cluster form a monitoring group.
More specifically, step S52 uses the position of the user terminal to obtain the closest monitoring device from which to acquire an image as quickly as possible; if that device has not captured the required object image, the next closest devices are queried in turn. The target image type helps determine the applicable camera type; for example, an indoor camera cannot capture a vehicle image. Step S53 limits the detection region, thereby saving system resources.
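Steps S51 through S54 can be sketched with set-overlap clustering and a radius cut. The greedy single-pass grouping, the Jaccard measure, and both thresholds are illustrative choices; the patent says only that clustering is computed from the similarity of the terminals' object numbers.

```python
def jaccard(a, b):
    """Similarity of the object-number sets two terminals have observed."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_terminals(observations, threshold=0.3):
    """S51: greedily group terminals whose observed object numbers overlap.

    `observations` maps terminal id -> set of object numbers seen there.
    The single-pass scheme and the 0.3 threshold are assumptions made for
    this sketch, not values from the patent.
    """
    groups = []  # each group: list of terminal ids forming one monitoring group
    for tid, seen in observations.items():
        for group in groups:
            rep = observations[group[0]]  # compare against the group's first member
            if jaccard(seen, rep) >= threshold:
                group.append(tid)
                break
        else:
            groups.append([tid])
    return groups

def select_in_radius(groups, distances, radius_km):
    """S52-S54: find the closest terminal, then select every member of any
    group that reaches into the detection area around it."""
    closest = min(distances, key=distances.get)
    chosen_groups = [g for g in groups
                     if any(distances[t] <= radius_km for t in g)]
    return closest, sorted({t for g in chosen_groups for t in g
                            if distances[t] <= radius_km})
```

The image request signals of step S54 would then be sent to the returned terminal list.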
Referring to fig. 5, an image-based object detection apparatus according to an embodiment of the present application includes: the analysis module is used for receiving the detection request signal from the user terminal and analyzing the detection request signal to acquire an original image and positioning data in the detection request signal; the output module is used for inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame; the request module is used for sending the marked image to the user terminal and requesting the user terminal to feed back the selection of the target detection frame; the generating module is used for responding to the selection of the target detection frame fed back by the user terminal and generating a target image according to the selected target detection frame; the selecting module is used for selecting a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and sending image request signals to the monitoring terminals; and the judging module is used for receiving the monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether an object pointed by the target image exists in the monitoring image.
As shown with reference to fig. 6, an electronic device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 808 including, for example, magnetic tape, hard disk, and the like; and a communication device 809. The communication device 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 illustrates an electronic device 800 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer storage medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through communications device 809, or installed from storage device 808, or installed from ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer storage media described above in some embodiments of the disclosure can be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (Hyper Text Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer storage medium may be one contained in the electronic device, or it may exist separately without being assembled into the electronic device. The computer storage medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: receive a detection request signal from a user terminal, and analyze the detection request signal to acquire an original image and positioning data in the detection request signal; input the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame; send the marked image to the user terminal and request the user terminal to feed back the selection of the target detection frame; respond to the selection of the target detection frame fed back by the user terminal, and generate a target image according to the selected target detection frame; select a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and send image request signals to the monitoring terminals; and receive the monitoring image fed back by the monitoring terminal and input the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether the object pointed by the target image exists in the monitoring image.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, the names of which units do not in some cases constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept defined above. For example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure are also encompassed.

Claims (10)

1. An image-based object detection method, comprising:
receiving a detection request signal from a user terminal, and analyzing the detection request signal to acquire an original image and positioning data in the detection request signal;
inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame;
sending the marked image to the user terminal and requesting the user terminal to feed back the selection of the target detection frame;
responding to the selection of the target detection frame fed back by the user terminal, and generating a target image according to the selected target detection frame;
selecting a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and sending image request signals to the monitoring terminals;
and receiving a monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether the object indicated by the target image exists in the monitoring image.
2. The image-based object detection method as claimed in claim 1, wherein the inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame comprises:
generating the target detection frame;
matching a target type and a corresponding confidence for each target detection frame;
judging whether the confidence of the target type of each target detection frame is greater than or equal to a preset confidence threshold; if so, displaying the corresponding target detection frame in the marked image, and if not, not displaying the corresponding target detection frame in the marked image.
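The confidence filtering described in this claim can be sketched as follows. This is an illustrative example only, not part of the patent; the detection record layout and the threshold value of 0.5 are assumptions chosen for demonstration.

```python
# Illustrative sketch of claim 2: keep only target detection frames whose
# target-type confidence meets a preset threshold. The data structure and
# threshold are hypothetical, not specified by the patent.
CONFIDENCE_THRESHOLD = 0.5  # assumed preset confidence threshold

def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Return the detection frames to display in the marked image."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"box": (10, 20, 50, 80), "type": "person", "confidence": 0.92},
    {"box": (60, 10, 90, 40), "type": "vehicle", "confidence": 0.31},
]
kept = filter_detections(detections)
# Only the high-confidence frame would be displayed in the marked image.
```

Frames that fail the threshold are simply omitted from the marked image rather than shown with a low-confidence label, matching the if/else branches of the claim.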
3. The image-based object detection method of claim 2, wherein the inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame comprises:
and judging whether the target detection frame exists in the marked image, if so, outputting the marked image, and if not, returning to extract another original image in the detection request signal.
4. The image-based object detection method of claim 3, wherein the inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame comprises:
collecting target detection training data and training the target recognition model with the target detection training data;
the input data of the target recognition model is an original picture, and the output data of the target recognition model is a target image with the target detection frames, together with the position coordinates and the corresponding target type of each target detection frame in the target image.
5. The image-based object detection method according to claim 4, wherein the selecting a plurality of monitoring terminals for object detection and sending an image request signal to the monitoring terminals according to the positioning data and the target type of the target image comprises:
and clustering the plurality of monitoring terminals according to the historical images acquired by the monitoring terminals and the target data identified from the historical images so as to form a plurality of monitoring terminal groups.
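The grouping step of this claim can be illustrated with a minimal sketch. The patent does not specify a clustering algorithm; here, as an assumption for demonstration, terminals are grouped by the dominant target type recognized in their historical images, and all names (`group_terminals`, the `cam-*` ids) are hypothetical.

```python
# Illustrative sketch of claim 5: cluster monitoring terminals into groups
# using target data recognized from their historical images. Grouping by
# dominant recognized type is an assumed stand-in for the unspecified
# clustering method.
from collections import Counter, defaultdict

def group_terminals(history):
    """Group terminal ids by the most frequent target type in their history.

    `history` maps a terminal id to the list of target types recognized
    in that terminal's historical images.
    """
    groups = defaultdict(list)
    for terminal_id, target_types in history.items():
        dominant = Counter(target_types).most_common(1)[0][0]
        groups[dominant].append(terminal_id)
    return dict(groups)

history = {
    "cam-01": ["person", "person", "vehicle"],
    "cam-02": ["vehicle", "vehicle"],
    "cam-03": ["person"],
}
groups = group_terminals(history)
# groups -> {"person": ["cam-01", "cam-03"], "vehicle": ["cam-02"]}
```

A production system would more likely combine location and recognition statistics in a proper clustering method (e.g. k-means over feature vectors); the point here is only that each terminal ends up in exactly one monitoring terminal group.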
6. The image-based object detection method according to claim 5, wherein the selecting a plurality of monitoring terminals for object detection and sending an image request signal to the monitoring terminals according to the positioning data and the target type of the target image comprises:
and selecting the monitoring terminal group where the closest monitoring terminal is located according to the positioning data and the target type of the target image.
7. The image-based object detection method according to claim 6, wherein the selecting a plurality of monitoring terminals for object detection and sending an image request signal to the monitoring terminals according to the positioning data and the target type of the target image comprises:
setting a detection area centered on the closest monitoring terminal according to the positioning data and the target type of the target image;
and selecting a plurality of monitoring terminals from all the monitoring terminal groups in the detection area, and sending the image request signals to the selected plurality of monitoring terminals.
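The circular detection area of this claim can be sketched as a radius query around the closest monitoring terminal. This example is illustrative only: it uses planar coordinates and an assumed radius, whereas a real deployment would use geodesic distance over latitude/longitude from the positioning data.

```python
# Illustrative sketch of claim 7: select all monitoring terminals lying
# within a circular detection area centered on the closest terminal.
# Planar (x, y) coordinates and the 10-unit radius are assumptions.
import math

def terminals_in_area(center, terminals, radius):
    """Return ids of terminals within `radius` of `center`."""
    cx, cy = center
    return [tid for tid, (x, y) in terminals.items()
            if math.hypot(x - cx, y - cy) <= radius]

terminals = {"cam-01": (0.0, 0.0), "cam-02": (3.0, 4.0), "cam-03": (30.0, 40.0)}
selected = terminals_in_area((0.0, 0.0), terminals, radius=10.0)
# cam-01 and cam-02 fall inside the detection area; cam-03 does not.
```

The image request signals would then be sent only to the selected terminals, limiting the search to cameras near the target's last known position.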
8. An image-based object detection apparatus, comprising:
the analysis module is used for receiving a detection request signal from a user terminal and analyzing the detection request signal to acquire an original image and positioning data in the detection request signal;
the output module is used for inputting the original image into a target recognition model so that the target recognition model outputs a marked image with a target detection frame and a target type of the target detection frame;
the request module is used for sending the marked image to the user terminal and requesting the user terminal to feed back the selection of the target detection frame;
the generating module is used for responding to the selection of the target detection frame fed back by the user terminal and generating a target image according to the selected target detection frame;
the selecting module is used for selecting a plurality of monitoring terminals for object detection according to the positioning data and the target type of the target image and sending image request signals to the monitoring terminals;
and the judging module is used for receiving the monitoring image fed back by the monitoring terminal and inputting the monitoring image into an object recognition model so that the object recognition model outputs a judgment result of whether the object indicated by the target image exists in the monitoring image.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 7.
10. A computer storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the method of any of claims 1 to 7.
CN202210272123.5A 2021-08-20 2022-03-18 Image-based object detection method, device, equipment and storage medium Pending CN114708545A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110961430 2021-08-20
CN202110961430X 2021-08-20

Publications (1)

Publication Number Publication Date
CN114708545A true CN114708545A (en) 2022-07-05

Family

ID=82168931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210272123.5A Pending CN114708545A (en) 2021-08-20 2022-03-18 Image-based object detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114708545A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071366A (en) * 2023-04-04 2023-05-05 北京城建集团有限责任公司 Reverse construction method monitoring method, system, equipment and storage medium based on image processing
CN117580082A (en) * 2024-01-16 2024-02-20 武汉能钠智能装备技术股份有限公司四川省成都市分公司 Method for identifying positioning pseudo base station and positioning system

Similar Documents

Publication Publication Date Title
CN114708545A (en) Image-based object detection method, device, equipment and storage medium
US20150186426A1 (en) Searching information using smart glasses
CN109040960A (en) A kind of method and apparatus for realizing location-based service
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN110689804A (en) Method and apparatus for outputting information
CN110704491B (en) Data query method and device
US20190394717A1 (en) Method and apparatus for generating information
US10921131B1 (en) Systems and methods for interactive digital maps
Hamidi et al. Industry 4.0 urban mobility: goNpark smart parking tracking module
US20170280106A1 Public safety camera identification and monitoring system and method
CN110866524A (en) License plate detection method, device, equipment and storage medium
CN112418026B (en) Vehicle violation detection method, system and device based on video detection
CN111782980B (en) Mining method, device, equipment and storage medium for map interest points
CN110737820A (en) Method and apparatus for generating event information
CN111310595B (en) Method and device for generating information
US10506201B2 (en) Public safety camera identification and monitoring system and method
WO2023226448A1 (en) Method and apparatus for generating logistics point-of-interest information, and device and computer-readable medium
CN113362090A (en) User behavior data processing method and device
KR100631095B1 (en) System for collecting and managing construct information by using GIS
CN115049624A (en) Method, device, equipment, medium and product for sending early warning information of highway cracks
CN111694875B (en) Method and device for outputting information
CN111401182B (en) Image detection method and device for feeding rail
CN111339394B (en) Method and device for acquiring information
CN110149358B (en) Data transmission method, system and device
TW202036446A (en) System, server and method for providing travel information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination