WO2022257647A1 - Camera detection method and apparatus, storage medium, and electronic device - Google Patents
- Publication number
- WO2022257647A1 (PCT/CN2022/090626)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- space
- detected
- camera
- time point
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N20/00—Machine learning
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding; G06V10/10—Image acquisition; G06V10/12—Details of acquisition arrangements; Constructional details thereof; G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements; G06V10/141—Control of illumination
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding; G06V10/40—Extraction of image or video features; G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features; G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof; H04N23/60—Control of cameras or camera modules; H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices; H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
Definitions
- the present disclosure relates to the technical field of information security, and in particular to a camera detection method, a camera detection device, a computer-readable storage medium, and electronic equipment.
- the disclosure provides a camera detection method, a camera detection device, a computer-readable storage medium, and electronic equipment.
- a camera detection method including: acquiring network data packets in the space to be detected; matching a first feature of the network data packets changing over time with a second feature of the space to be detected changing over time; and determining, according to the matching result of the first feature and the second feature, whether there is a camera in the space to be detected.
- a camera detection device including a processor and a memory; the processor is used to execute the following program modules stored in the memory: a data acquisition module configured to acquire network data packets in the space to be detected; a feature matching module configured to match a first feature of the network data packets changing over time with a second feature of the space to be detected changing over time; and a detection result determination module configured to determine, according to the matching result of the first feature and the second feature, whether there is a camera in the space to be detected.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the camera detection method of the above-mentioned first aspect and possible implementations thereof are implemented.
- an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the executable instructions so as to perform the camera detection method of the above first aspect and its possible implementations.
- FIG. 1 shows a system architecture diagram of an operating environment in this exemplary embodiment;
- FIG. 2 shows a schematic structural diagram of an electronic device in this exemplary embodiment;
- FIG. 3 shows a flow chart of a camera detection method in this exemplary embodiment;
- FIG. 4 shows a flow chart of determining a second feature in this exemplary embodiment;
- FIG. 5 shows a schematic diagram of a camera detection interface in this exemplary embodiment;
- FIG. 6 shows a schematic diagram of matching network data packets with flashlight working times in this exemplary embodiment;
- FIG. 7 shows a flow chart of determining the camera position in this exemplary embodiment;
- FIG. 8 shows a flow chart of prompting the position of the camera in this exemplary embodiment;
- FIG. 9 shows a schematic structural diagram of a camera detection device in this exemplary embodiment;
- FIG. 10 shows a schematic structural diagram of another camera detection device in this exemplary embodiment.
- Example embodiments will now be described more fully with reference to the accompanying drawings.
- Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of example embodiments to those skilled in the art.
- the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
- numerous specific details are provided in order to give a thorough understanding of embodiments of the present disclosure.
- those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or with other methods, components, devices, steps, etc. adopted instead.
- well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
- Fig. 1 shows a system architecture diagram of the operating environment of this exemplary embodiment.
- the system architecture 100 may include a data capture device 110 and a data analysis device 120 .
- the data capture device 110 may be a device with a network communication function, such as a mobile phone, a tablet computer, a personal computer, and the like.
- the data capture device 110 is located in the space to be detected, and is used to capture network data packets in the space to be detected.
- the space to be detected includes but is not limited to hotel rooms, bathrooms, changing rooms, and rental houses.
- the data capture device 110 and the data analysis device 120 may be connected through a wired or wireless communication link, so that the data capture device 110 sends the captured network data packets to the data analysis device 120 .
- the data analysis device 120 may be another terminal connected to the data capture device 110, or a background server that provides camera detection services.
- the data analysis device 120 is configured to analyze network data packets to detect whether there is a camera in the space to be detected.
- the system architecture 100 may further include a change construction device 130 for actively constructing changes in the space to be detected.
- the change construction device 130 may be a flashlight device, which constructs light and dark changes in the space to be detected by flashing a light in the space to be detected.
- the change construction device 130 may also be a projection device, which constructs pattern and texture changes in the space to be detected by projecting onto the space to be detected.
- the change construction device 130 may include a camera module, which is used to collect images of the space to be detected while constructing changes in the space to be detected, and the image may be sent to the data analysis device 120 for assisting the detection of the camera.
- any two or more of the data capture device 110, the data analysis device 120, and the change construction device 130 may be integrated into one device.
- For example, a mobile phone can realize the functions of both the data capture device 110 and the change construction device 130: it constructs changes in the space to be detected by controlling the work of its flashlight while capturing network data packets, and sends the network data packets to a background server for analysis to realize camera detection; alternatively, the mobile phone can also realize the function of the data analysis device 120 at the same time, analyzing the captured network data packets locally to realize camera detection.
- Exemplary embodiments of the present disclosure also provide an electronic device for performing the above camera detection method.
- the electronic device may be the data analysis device 120 described above.
- Taking the mobile terminal 200 in FIG. 2 as an example, the structure of the above-mentioned electronic device is exemplarily described below. Those skilled in the art will appreciate that, apart from components specifically intended for mobile purposes, the configuration in FIG. 2 can also be applied to stationary devices.
- the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus, Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, earphone interface 274, sensor module 280, display screen 290, camera module 291, flashlight 292, motor 293, button 294 and SIM (Subscriber Identification Module, Subscriber Identification Module) card interface 295, etc.
- Processor 210 can include one or more processing units, for example: processor 210 can include AP (Application Processor, application processor), modem processor, GPU (Graphics Processing Unit, graphics processing unit), ISP (Image Signal Processor, image signal processor), controller, encoder, decoder, DSP (Digital Signal Processor, digital signal processor), baseband processor and/or NPU (Neural-Network Processing Unit, neural network processor), etc.
- the encoder can encode (i.e., compress) image or video data, for example, encode an image or video of the space to be detected into corresponding code stream data, so as to reduce the bandwidth occupied by data transmission; the decoder can decode (i.e., decompress) the image or video code stream data to restore the image or video data.
- the mobile terminal 200 may support one or more encoders and decoders. In this way, the mobile terminal 200 can process images or videos in multiple encoding formats, such as image formats like JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats like MPEG (Moving Picture Experts Group)-1, MPEG-2, H.263, H.264, and HEVC (High Efficiency Video Coding).
- the processor 210 may include one or more interfaces, and form connections with other components of the mobile terminal 200 through different interfaces.
- the internal memory 221 may be used to store computer-executable program codes including instructions.
- the internal memory 221 may include volatile memory and non-volatile memory.
- the processor 210 executes various functional applications and data processing of the mobile terminal 200 by executing instructions stored in the internal memory 221 .
- the external memory interface 222 can be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the mobile terminal 200.
- the external memory communicates with the processor 210 through the external memory interface 222 to implement a data storage function, such as storing images, videos and other files.
- the USB interface 230 is an interface conforming to the USB standard specification, and can be used to connect a charger to charge the mobile terminal 200 , and can also be connected to earphones or other electronic devices.
- the charging management module 240 is configured to receive charging input from the charger. While the charging management module 240 is charging the battery 242, it can also supply power to the device through the power management module 241; the power management module 241 can also monitor the state of the battery.
- the wireless communication function of the mobile terminal 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- the mobile communication module 250 can provide 2G, 3G, 4G, 5G and other mobile communication solutions applied on the mobile terminal 200 .
- the wireless communication module 260 can provide wireless communication solutions applied on the mobile terminal 200, such as WLAN (Wireless Local Area Network, e.g., a Wi-Fi (Wireless Fidelity) network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared).
- the mobile terminal 200 can realize a display function and display a user interface through the GPU, the display screen 290 and the AP. For example, when the user performs camera detection, the mobile terminal 200 may display an interface of a camera detection App (Application, application program) on the display screen 290 .
- the mobile terminal 200 can realize the shooting function through the ISP, camera module 291 , encoder, decoder, GPU, display screen 290 and AP.
- the user can enable the image or video capture function in the camera detection App, and at this time, the image of the space to be detected can be collected through the camera module 291 .
- the mobile terminal 200 can implement audio functions through an audio module 270 , a speaker 271 , a receiver 272 , a microphone 273 , an earphone interface 274 , and an AP.
- the sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, an air pressure sensor 2804, etc., so as to realize corresponding sensing and detection functions.
- the depth sensor 2801 can collect the depth data of the space to be detected
- the gyro sensor 2803 can collect the pose data of the mobile terminal 200, and these two kinds of data can assist in realizing the positioning of the camera.
- the flashlight 292 is used to increase the exposure of the space to be detected, so as to change its light and dark state.
- the flashlight 292 can be matched with the camera module 291 to form a specific relative positional relationship with the cameras in the camera module 291 .
- the flashlight 292 can work alone, and can also perform supporting work when the camera module 291 captures an image, such as flashing according to the shutter time.
- the flash light 292 can also serve as a reminder, for example, flashing reminders when a call comes in or the battery is too low.
- the motor 293 can generate vibration prompts, and can also be used for touch vibration feedback and the like.
- the keys 294 include a power key, a volume key and the like.
- the mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards to implement functions such as calling and mobile communication.
- the camera detection method in this exemplary embodiment will be described below.
- the application scenarios of this method include but are not limited to: a user in a hotel room opens the camera detection App on a mobile phone and captures network data packets, the mobile phone then executes the camera detection method of this exemplary embodiment and displays the detection result in the App; or the mobile phone captures the network data packets and uploads them to a server, and the server executes the camera detection method of this exemplary embodiment and returns the detection result to the mobile phone for display.
- Figure 3 shows an exemplary flow of a camera detection method, which may include:
- Step S310 acquiring network data packets in the space to be detected
- Step S320 matching the first time-varying feature of the network data packet with the time-varying second feature of the space to be detected
- Step S330 according to the matching result of the first feature and the second feature, determine whether there is a camera in the space to be detected.
- the change of the space to be detected is constructed, and the change of the obtained network data packet is matched with the change of the space to be detected, so as to detect whether there is a camera in the space to be detected.
- since a camera sends video data packets in a differential manner, so that the packets are correlated with changes in the space to be detected, detecting the camera by matching the change characteristics of the network data packets with those of the space to be detected is highly accurate.
- this solution is suitable for detecting almost all cameras that need to be connected to the Internet, and is not limited to cameras equipped with supplementary light sources; it is also less affected by environmental lighting and other factors and has lower requirements on the scene, which helps reduce missed detections.
- this solution detects based on changes in the space to be detected, and will not make false alarms for nearby cameras outside the space to be detected, and has high reliability.
- step S310 network data packets in the space to be detected are acquired.
- the data capture device located in the space to be detected can capture network data packets, including but not limited to wireless local area network network data packets, Bluetooth network data packets, mobile network data packets, and the like. Relevant software or settings can be used on the data capture device to capture network data packets. Taking the capture of network data packets of wireless LAN as an example, setting the network card of the data capture device to promiscuous mode can capture all network data packets flowing through, no matter where their destination addresses are.
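- As an illustration of the packet capture described above, the following is a minimal Python sketch (not part of the disclosure) that records a timestamp and size for every packet seen on a wireless interface. The Scapy library and the interface name "wlan0" are assumptions, and the interface is assumed to have been placed in promiscuous mode by the operating system.

```python
# Illustrative sketch only: capture packets and record (timestamp, size).
# Assumes the Scapy library and a wireless interface named "wlan0" that the
# operating system has already put into promiscuous mode.
from scapy.all import sniff

captured = []  # list of (timestamp, size_in_bytes) tuples

def on_packet(pkt):
    # Only communication-layer information is recorded; payloads are not decrypted.
    captured.append((float(pkt.time), len(pkt)))

# Capture for 60 seconds; every packet seen on the interface is recorded,
# regardless of its destination address.
sniff(iface="wlan0", prn=on_packet, store=False, timeout=60)
print(f"captured {len(captured)} packets")
```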
- the captured network data packets include data packets sent by all network devices within a certain range. If there is a camera in the space to be detected, the captured network data packets also include video data packets sent by the camera. In subsequent processing, the video data packets of the camera can be identified from the network data packets and detected. Therefore, this exemplary embodiment can realize the detection of the camera under the condition that there are cameras and other network devices in the space to be detected.
- the user may be guided to actively shut down other network devices.
- a prompt message may be displayed to prompt the user to turn off known network devices in the space to be detected, such as smart appliances, or to turn off the network connection function of these network devices.
- the data analysis device can obtain the network data packets from the data capture device for subsequent processing. If the data capture device and the data analysis device are two devices, the data capture device can send the network data packets to the data analysis device through the network; if they are one device, the network data packets can be passed through inter-process communication.
- step S320 the above-mentioned first feature of the network data packet changing over time is matched with the second feature of the space to be detected changing over time.
- the first feature refers to the feature of the network data packet changing with time
- the second feature refers to the feature of the space to be detected changing with time
- the first and second are used here to distinguish different subjects of the feature.
- the first feature of the network data packet may be a feature of one or more indicators of the network data packet changing over time.
- Indicators include but are not limited to: the size of a single data packet, the time interval between adjacent data packets, the number of data packets per unit time, etc., which generally belong to the information of the network communication layer and can be obtained without decrypting the network data packets.
- the network data packet itself includes a time stamp, and the data capture device can also record time information when capturing network data packets, so it is easy to associate the above indicators with time, analyze how the indicators change over time, and obtain the first feature.
- the present disclosure does not limit the specific form of the first feature, for example, it may be a sequence formed by multiple time-index arrays.
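- For illustration only, the following sketch builds such a first feature from the (timestamp, size) records captured above as a sequence of time-indexed values (packets per second and bytes per second); the one-second bin width and the helper name are assumptions, not values specified by the disclosure.

```python
# Illustrative sketch: a "first feature" as a sequence of time-indexed values.
from collections import defaultdict

def build_first_feature(captured):
    """captured: iterable of (timestamp, size_in_bytes)."""
    bins = defaultdict(lambda: [0, 0])        # second -> [packet count, total bytes]
    for ts, size in captured:
        sec = int(ts)
        bins[sec][0] += 1
        bins[sec][1] += size
    # A list of (second, packet_count, total_bytes) tuples sorted by time.
    return [(sec, cnt, total) for sec, (cnt, total) in sorted(bins.items())]

print(build_first_feature([(0.2, 900), (0.7, 64), (1.1, 1200)]))
# [(0, 2, 964), (1, 1, 1200)]
```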
- the change of the space to be detected refers to the change of the space to be detected that can cause a significant change in the image captured by the camera.
- active actions can be applied to the space to be detected to cause strong changes in light and shade, patterns, textures, etc., including but not limited to: switching a flashlight, or switching the lights and curtains in the space to be detected, to cause light and dark changes in the space to be detected; or projecting or moving patterns that contrast strongly with the space to be detected, such as projecting a cartoon animation on a wall of the space to be detected, or moving a cartoon poster in and out of the space to be detected multiple times, to cause pattern and texture changes in the space to be detected.
- the above-mentioned second feature can be obtained by recording the time point at which the space to be detected changes.
- the present disclosure does not limit the specific form of the second feature, for example, it may be a sequence formed by multiple time points when the space to be detected changes.
- When the change in the space to be detected is constructed by switching a flashlight as above, the second feature can be determined according to the working time of the flashlight.
- the camera detection method may include:
- Step S410 obtaining the working time of the flashlight for flashing or illuminating the space to be detected
- Step S420 determining the second characteristic of the time-varying space to be detected according to the working time of the flashlight.
- when using a flashlight to construct the change in the space to be detected, the flashlight can be flashed, or kept on for a period of time for illumination (such as the flashlight function on a mobile phone).
- the working time of the flashlight can be sent to the data analysis device by the flashlight device (such as the above-mentioned variable configuration device), or the working time of the flashlight can be obtained directly when the data analysis device controls the operation of the flashlight.
- the working time of the flashlight includes the start time and end time of each flash or illumination, or includes the start time and duration of each flash or illumination, from which the end time can be calculated.
- the start time and end time of each flash or illumination are taken as the time points when the space to be detected changes, and the sequence formed by them can be used as the above-mentioned second feature.
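- As a hedged illustration of this step, the sketch below derives the second feature from a list of (start, end) flash intervals by taking both endpoints of each interval as change time points; the function name and data layout are assumptions.

```python
# Illustrative sketch: build the "second feature" from flashlight working times.
def build_second_feature(flash_intervals):
    """flash_intervals: iterable of (start_time, end_time) for each flash/illumination."""
    change_points = []
    for start, end in flash_intervals:
        change_points.append(start)   # space becomes brighter
        change_points.append(end)     # space becomes darker again
    return sorted(change_points)

# Three flashes recorded by the flashlight device (times in seconds).
print(build_second_feature([(10.0, 10.5), (20.0, 20.5), (30.0, 30.5)]))
# [10.0, 10.5, 20.0, 20.5, 30.0, 30.5]
```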
- step S410 may include:
- In response to an operation on the flashlight control in the camera detection interface, the flashlight is controlled to flash or illuminate the space to be detected, and the working time of the flashlight is obtained.
- FIG. 5 shows a schematic diagram of a camera detection interface 500, which may be a user interface in a camera detection App.
- a flashlight control 510 is provided, and the user can click or press and hold the control 510 to trigger one flash, with the system recording the working time of the flashlight.
- the flashlight control 510 can also implement a function similar to a light switch, for example, when the user clicks or long presses the flashlight control 510, the flashlight is triggered to turn on and remains on, and when the user clicks or long presses the flashlight again, the flashlight is turned off, and the system records the working time of the flashlight.
- when the user uses the camera detection App or its flash function for the first time, the user needs to grant the camera detection App permission to use the flashlight. After the user agrees, the App can call related system services to control the flashlight and obtain data such as the working time of the flashlight.
- camera detection methods can include:
- in response to an operation on a time control in the camera detection interface, the second feature of the space to be detected changing over time is determined.
- the time control is used to enable the user to manually record the time when the space to be detected changes.
- relevant prompt information can be displayed to prompt the user to operate the time control when manually constructing changes in the space to be detected.
- the prompt information can be "Please turn the main light of the room on and off several times, and click the xx button every time the light is turned on or off".
- the system records the time when the user operates the time control as the time when the space to be detected changes, and then the second feature can be obtained.
- When the space to be detected changes, the picture captured by the camera also changes.
- the camera usually uses differential encoding or transmission to process the picture, so when the picture changes, the video data packets sent by the camera will also change, for example, the size or number of video data packets increases significantly.
- the change of the space to be detected should be correlated with the change of the video data packets sent by the camera.
- step S320 may include:
- a pre-trained machine learning model is used to process the first feature and the second feature, and output a matching result.
- In a test scene, first sample features of a large number of network data packets changing over time and corresponding second sample features of the test scene changing over time can be obtained; a first sample feature and its corresponding second sample feature form a sample data group. A part of the sample data groups is obtained when changes are actually constructed in the test scene, so that the first sample feature is correlated with the second sample feature, and the labeled data (ground truth) is 1; in another part of the sample data groups, the first sample feature has no correlation with the second sample feature, and the labeled data is 0. The initially constructed machine learning model, such as a neural network, is trained with the sample data groups and their labeled data to obtain the trained machine learning model.
- the above-mentioned first feature and second feature are input into the machine learning model, and a result of whether the two match is output.
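- The following is a minimal, hypothetical sketch of such a matching model, assuming scikit-learn is available and that each first/second sample feature has already been resampled to a fixed-length vector; the disclosure mentions, for example, a neural network, while a logistic regression is used here only to keep the sketch short.

```python
# Illustrative sketch: a binary "match / no-match" model trained on sample data groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_input(first_feature_vec, second_feature_vec):
    # One example is the concatenation of a first feature and a second feature.
    return np.concatenate([first_feature_vec, second_feature_vec])

# Hypothetical sample data groups: (first sample feature, second sample feature, label);
# label 1 = correlated (change actually constructed), label 0 = no correlation.
rng = np.random.default_rng(0)
groups = [(rng.random(32), rng.random(32), int(rng.integers(0, 2))) for _ in range(200)]

X = np.stack([make_input(f, s) for f, s, _ in groups])
y = np.array([label for _, _, label in groups])
model = LogisticRegression(max_iter=1000).fit(X, y)

# At detection time, the first and second features of the space to be detected
# are fed to the trained model, which outputs the matching result.
sample = make_input(rng.random(32), rng.random(32)).reshape(1, -1)
print("match" if model.predict(sample)[0] else "no match")
```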
- the above-mentioned first feature includes the first time points at which the network data packets change, and there may be multiple first time points; the above-mentioned second feature includes the second time points at which the space to be detected changes, and there may also be multiple second time points.
- step S320 may include:
- the first time point is matched with the second time point.
- matching the first time point and the second time point is to determine whether there is a correlation between the two time points in time distribution.
- For example, abrupt change points fitted from the indicators of the network data packets can be determined as the first time points; the time points at which the space to be detected changes are recorded as the second time points; the first time points are paired with the second time points to obtain multiple time point pairs, each time point pair including a first time point and a corresponding second time point; if the difference between the first time point and the second time point in each time point pair does not exceed a preset time difference threshold (which can be set according to experience, such as 1 second or 3 seconds), it is determined that the first time points and the second time points are successfully matched.
- time compensation may be performed on the first time points or the second time points as a whole. For example, a time compensation value is determined according to the time difference between the earliest first time point and the earliest second time point; the time compensation value is then added to all the second time points; the first time points are paired with the compensated second time points, and it is judged whether the difference within each time point pair exceeds the time difference threshold, so as to obtain the matching result.
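- A simplified sketch of this time-point matching is given below; pairing the sorted time points in order and requiring equal counts are simplifying assumptions, as is the 3-second threshold.

```python
# Illustrative sketch: pair first/second time points, optionally apply a global
# time compensation, and declare a match when every pair is within the threshold.
def match_time_points(first_points, second_points, threshold=3.0, compensate=True):
    if not first_points or len(first_points) != len(second_points):
        return False                      # simplification: require equal counts
    first_points = sorted(first_points)
    second_points = sorted(second_points)
    if compensate:
        # Compensation value taken from the earliest pair, applied to all second points.
        offset = first_points[0] - second_points[0]
        second_points = [t + offset for t in second_points]
    return all(abs(f - s) <= threshold
               for f, s in zip(first_points, second_points))

print(match_time_points([10.2, 20.4, 30.1], [10.0, 20.0, 30.0]))   # True
```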
- FIG. 6 shows a schematic diagram of matching the times at which the size of the network data packets changes (i.e., the first time points) with the times at which the flashlight works (i.e., the second time points). It can be seen that although the first time points are not completely consistent with the second time points, the two show a strong correlation. After pairing the first time points with the second time points, the time difference of each time point pair is found to be less than 3 seconds, so it can be determined that the first time points and the second time points are successfully matched.
- the first feature and the second feature may be used as two variables for correlation analysis.
- the first feature may be the size of the network data packet at different times
- the second feature may be whether the flashlight is turned on at different times (the value is 1 if the flashlight is on and 0 if it is off). The two variables are then analyzed using the statistical method of correlation analysis, which outputs a probability value of the correlation. If the probability value reaches a preset probability threshold (such as 70% or 80%), it is determined that the first feature and the second feature are successfully matched.
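- For illustration, the sketch below uses the Pearson correlation coefficient as the correlation measure; the disclosure does not prescribe a particular statistic, so the choice of statistic and the 0.7 threshold are assumptions.

```python
# Illustrative sketch: correlation analysis between packet size over time and a
# flashlight on/off indicator sampled at the same instants.
import numpy as np

def features_correlated(packet_sizes, flash_on, threshold=0.7):
    r = np.corrcoef(np.asarray(packet_sizes, dtype=float),
                    np.asarray(flash_on, dtype=float))[0, 1]
    return r >= threshold

sizes = [100, 110, 950, 930, 105, 940, 920, 100]   # bytes per sampling instant
flash = [0,   0,   1,   1,   0,   1,   1,   0]     # flashlight off/on
print(features_correlated(sizes, flash))           # True: size tracks the flashlight
```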
- step S330 according to the matching result of the first feature and the second feature, it is determined whether there is a camera in the space to be detected.
- when the first feature is successfully matched with the second feature, it is determined that there is a camera in the space to be detected; when the match is unsuccessful, it is determined that there is no camera in the space to be detected. Furthermore, the corresponding detection result can be displayed in the camera detection interface.
- If a camera in the space to be detected captures pictures of the space, the video data packets it sends will respond to the change in the space to be detected, so that the first feature matches the second feature.
- If a camera is located outside the space to be detected and does not photograph the space to be detected, then even if the signal of the video data packets it sends passes through the space to be detected and is captured by the data capture device, the video data packets will not respond to the change in the space; the first feature then does not match the second feature, and this exemplary embodiment does not misjudge that a camera exists. Therefore, in this exemplary embodiment, the detection range can be accurately limited to the space to be detected, ensuring the accuracy of the detection result.
- the network data packets captured in step S310 may include multiple different types of data packets, for example, data packets sent by multiple network devices in the space to be detected.
- Network data packets from different sources can be grouped according to the header information of the network data packets.
- the header information includes but is not limited to: IP (Internet Protocol) address, MAC (Media Access Control) address (i.e., physical address), encoding information, communication protocol information, etc.
- network data packets may be grouped according to their destination IP addresses, and network data packets with the same destination IP address are grouped into one group.
- the time-varying first features of each group of network data packets can be analyzed separately to obtain multiple sets of first features, and each set of first features is matched with the time-varying second feature of the space to be detected; when at least one set of first features is successfully matched with the second feature, it is determined that a camera exists in the space to be detected.
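- The grouping and per-group matching can be sketched as follows (illustrative only): packets are grouped by destination IP address taken from the header, and a group-matching function built from the earlier first-feature and matching sketches is passed in as a parameter; Scapy packet objects and the function names are assumptions.

```python
# Illustrative sketch: group captured packets by destination IP address so that
# each group's time-varying first feature can be matched against the second
# feature separately. Assumes Scapy packet objects.
from collections import defaultdict
from scapy.all import IP

def group_by_destination(packets):
    groups = defaultdict(list)            # destination IP -> [(timestamp, size), ...]
    for pkt in packets:
        if pkt.haslayer(IP):
            groups[pkt[IP].dst].append((float(pkt.time), len(pkt)))
    return groups

def detect_camera(packets, second_feature, match_group):
    """match_group(records, second_feature) -> bool; e.g. assembled from the
    earlier first-feature and time-point-matching sketches (an assumption)."""
    for dst, records in group_by_destination(packets).items():
        if match_group(records, second_feature):   # camera reported on first match
            return True, dst
    return False, None
```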
- Further, the camera detection method may include: matching the format feature of the network data packets with a preset format feature.
- In step S330, it may then be determined whether there is a camera in the space to be detected according to both the matching result of the first feature and the second feature, and the matching result of the format feature of the network data packets and the preset format feature.
- the format feature of the network data packet is a feature related to data format, communication protocol, etc., including but not limited to port, traffic, MAC address, etc.
- When a camera sends video data packets, it needs to follow specific data formats, communication protocols, etc., so the video data packets have specific format features, that is, the above-mentioned preset format features.
- After a network data packet is captured, its format feature can be obtained by parsing the packet and matched with the preset format feature, so as to detect, from the aspect of data format, whether the network data packet is a video data packet sent by a camera.
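- A toy, rule-based version of this parsing-and-matching is sketched below; the signature values (streaming-related ports and a MAC address prefix) are purely hypothetical examples, not signatures specified by the disclosure.

```python
# Illustrative sketch: compare parsed header fields against preset format features.
PRESET_FORMAT_FEATURES = {
    "ports": {554, 8554},             # hypothetical streaming-related ports
    "mac_prefixes": {"00:11:22"},     # hypothetical camera-vendor MAC prefix
}

def matches_preset_format(header):
    """header: dict of parsed fields, e.g. {'dst_port': 554, 'src_mac': '...'}"""
    port_hit = header.get("dst_port") in PRESET_FORMAT_FEATURES["ports"]
    mac_hit = any(header.get("src_mac", "").lower().startswith(prefix)
                  for prefix in PRESET_FORMAT_FEATURES["mac_prefixes"])
    return port_hit or mac_hit

print(matches_preset_format({"dst_port": 554, "src_mac": "00:11:22:33:44:55"}))  # True
```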
- Alternatively, the format features of the network data packets can be input into another pre-trained machine learning model (different from the machine learning model used to match the first feature and the second feature), which processes and identifies them and outputs whether they match the preset format features.
- the final detection result may be determined by combining the matching result of the first feature and the second feature with the matching result of the format feature of the network data packets and the preset format feature.
- the two matching results can be combined in an "or" relationship, that is, a camera is determined to exist when either matching result is a successful match, which further reduces missed detections; the two matching results can also be combined in an "and" relationship, that is, a camera is determined to exist only when both matching results are successful matches, which further reduces false alarms.
- the present disclosure does not limit this.
- When constructing changes in the space to be detected, the methods adopted can be divided into the following two types:
- the first is the overall change in the structure of the space to be detected, such as switching lights, curtains, etc., which will cause changes in the light and shade of the entire space;
- the second is to construct local changes in the space to be detected, such as turning on a flashlight or projecting a cartoon animation for a certain area.
- When local changes are constructed, the position of the camera can be further detected.
- The following uses constructing a local change with the flashlight as an example for illustration. It should be understood that the principle of the solution is the same when the flashlight is replaced by another way of constructing local changes.
- the flash can be made to flash in the space to be detected in various poses, so that the local area covered by the flash is different in different poses, and the working time of the flash in the various poses can be obtained.
- For example, the pose of the change construction device can be obtained through INS (Inertial Navigation System) positioning, based on sensors in the change construction device such as the above-mentioned gyroscope sensor.
- a camera module can also be configured in the change construction device, and by collecting images of the space to be detected under different poses, visual positioning is performed to output the pose of the device.
- the camera detection method may include:
- Step S710 determining the second feature corresponding to each pose according to the working time of the flashlight in each pose
- Step S720 matching the first feature with the second feature corresponding to each pose
- Step S730 when the first feature is successfully matched with the second feature corresponding to at least one pose, it is determined that there is a camera in the space to be detected.
- the first feature is successfully matched with the second feature corresponding to at least one pose, indicating that the change of the network data packet is related to the change of the local area of the space to be detected corresponding to the pose, and it can be determined that there is a camera in the space to be detected, and The camera can capture pictures of the local area.
- the camera detection method may also include:
- Step S740 determining the position of the camera according to the above at least one pose.
- the above at least one pose is called a suspicious pose.
- For example, the network data packets obtained under the suspicious poses can be analyzed and, combined with the principle of radio direction finding, the direction of the camera can be determined with a deviation not exceeding 20 degrees.
- the camera detection method may also include:
- collecting images of the space to be detected under each pose, and estimating the position of the camera according to the above at least one pose and the correspondence between the images and the poses.
- the relative positional relationship between the camera used to collect images and the flashlight is fixed.
- the pose transformation relationship between the two can be determined, so that the pose of the camera when collecting an image is converted into the pose of the flashlight, and the correspondence between the images and the flashlight poses can thus be established.
- the camera and the flashlight may also be provided together as one camera module.
- for example, the change construction device may be a mobile phone whose camera module includes an RGB camera and a flashlight.
- in this case, the pose of the camera can be equated with the pose of the flashlight.
- the local area covered by the flash is the local area that the camera can capture, and it is presumed that the camera is located in the opposite direction of this local area. Therefore, according to the correspondence between the image and the pose, the image corresponding to the opposite direction of the local area is found. For example, after determining the suspicious pose, rotate the suspicious pose by 180 degrees to obtain the reverse pose, acquire an image corresponding to the reverse pose, and determine that the camera is located in the area where the image is located.
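- A simplified sketch of this reverse-pose lookup is given below; poses are reduced to a yaw angle in degrees and images are identified by placeholder names, both of which are assumptions made only for illustration.

```python
# Illustrative sketch: rotate a suspicious pose by 180 degrees to get the reverse
# pose, then pick the collected image whose recording pose is closest to it.
def reverse_pose(yaw_deg):
    return (yaw_deg + 180.0) % 360.0

def angular_distance(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def image_for_reverse_pose(suspicious_yaw, images):
    """images: list of (yaw_deg, image_id) recorded during flash/image capture."""
    target = reverse_pose(suspicious_yaw)
    return min(images, key=lambda item: angular_distance(item[0], target))[1]

images = [(0.0, "img_north"), (90.0, "img_east"), (180.0, "img_south"), (270.0, "img_west")]
print(image_for_reverse_pose(0.0, images))   # 'img_south': area where the camera may be
```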
- For example, the user holds the mobile phone and performs flashing and image acquisition in different areas of the space to be detected; the mobile phone can create map data of the space to be detected, such as a 3D point cloud map, based on the collected images and its own poses. After the suspicious poses are determined, the images corresponding to the suspicious poses can be found among the collected images.
- The areas where these images are located are the local areas that the camera can capture; these areas are then located in the map data, and according to the positional relationship the areas in the opposite direction are determined as the areas where the camera may be located, so as to estimate the position of the camera.
- guidance information can be presented, so that the user can aim at different areas of the space to be detected in a reasonable posture to perform flash and image acquisition.
- the user first uses the mobile phone to collect images of the entire space to be detected and uploads them to the server.
- the server creates map data for the space to be detected by executing the SLAM (Simultaneous Localization And Mapping) algorithm.
- the server plans a reasonable way for the user to construct spatial flash changes: it guides the user to a suitable position (usually the central position) in the space to be detected, to start flashing from a certain direction and to turn clockwise or counterclockwise; every time the user turns to a suitable angle, a prompt message such as "Please stay at this position and flash" is displayed, so as to realize a reasonable and comprehensive detection of the entire space to be detected.
- Step S810 according to the position of the camera, determine the candidate image where the camera is located in the above multiple images
- Step S820 prompting the position of the camera according to the candidate image.
- the candidate image may be an image corresponding to the above reverse pose, or a corresponding image may be found according to the area where the camera may be located in the map data as the candidate image.
- Candidate images, or local areas in the candidate images, can be displayed on the camera detection interface, together with related text prompt information such as "a camera may exist in the following areas", so that the user can further search for the camera in the space to be detected.
- the area where the camera may be located may be marked in the map data of the space to be detected, and relevant text prompt information may be displayed, so as to facilitate further searching by the user.
- the camera detection device 900 may include:
- the data acquisition module 910 is configured to acquire network data packets in the space to be detected
- the feature matching module 920 is configured to match the above-mentioned first feature of the network data packet that changes with time with the second feature of the space to be detected that changes with time;
- the detection result determining module 930 is configured to determine whether there is a camera in the space to be detected according to the matching result of the first feature and the second feature.
- the data acquisition module 910 is further configured to:
- obtain the working time of a flashlight used to flash or illuminate the space to be detected, and determine, according to the working time of the flashlight, the second feature of the space to be detected changing over time.
- the working time of the flashlight includes the working time of the flashlight in various postures.
- the data acquisition module 910 is configured to:
- the second feature corresponding to each pose is determined according to the working time of the flashlight in each pose.
- the detection result determination module 930 is configured to:
- determine that there is a camera in the space to be detected when the first feature is successfully matched with the second feature corresponding to at least one pose.
- the detection result determination module 930 is further configured to:
- the position of the camera is determined according to the above at least one pose.
- the data acquisition module 910 is further configured to:
- collect images of the space to be detected under each pose.
- the detection result determination module 930 is configured to:
- the position of the camera is determined according to the at least one pose and the corresponding relationship between the image and the pose.
- the detection result determination module 930 is further configured to:
- determine, among the collected images, a candidate image in which the camera is located according to the position of the camera, and prompt the position of the camera according to the candidate image.
- the candidate image is an image corresponding to a reverse pose after at least one pose is rotated by 180 degrees.
- the data acquisition module 910 is configured to:
- In response to an operation on the flashlight control in the camera detection interface, control the flashlight to flash or illuminate the space to be detected, and obtain the working time of the flashlight.
- the data acquisition module 910 is configured to:
- in response to an operation on a time control in the camera detection interface, determine the second feature of the space to be detected changing over time.
- the feature matching module 920 is configured to:
- a pre-trained machine learning model is used to process the first feature and the second feature, and output a matching result.
- the feature matching module 920 is further configured to:
- determine the labeled data of the sample data groups: if the first sample feature in a sample data group is correlated with the second sample feature, the labeled data is 1; if the first sample feature in a sample data group has no correlation with the second sample feature, the labeled data is 0;
- the machine learning model is trained by using the sample data set and its labeled data.
- the feature matching module 920 is further configured to:
- match the format feature of the network data packets with the preset format feature.
- the detection result determination module 930 is configured to:
- the matching result of the first feature and the second feature and the matching result of the format feature of the network data packet and the preset format feature, it is determined whether there is a camera in the space to be detected.
- the detection result determination module 930 is configured to:
- when the matching result of the first feature and the second feature is a successful match, and the matching result of the format feature of the network data packets and the preset format feature is also a successful match, determine that there is a camera in the space to be detected.
- the first feature includes the first time point when the network data packet changes;
- the second feature includes the second time point when the space to be detected changes;
- the matching of the first feature of the network data packets changing over time with the second feature of the space to be detected changing over time includes:
- the first time point is matched with the second time point.
- the above-mentioned matching of the first time points and the second time points includes:
- pairing the first time points with the second time points to obtain multiple time point pairs, and determining that the first time points and the second time points are successfully matched if the difference between the first time point and the second time point in each time point pair does not exceed the time difference threshold.
- the above-mentioned matching of the first time point and the second time point further includes:
- Time compensation is performed on the first time point or the second time point before pairing the first time point with the second time point.
- the above-mentioned matching of the first feature of the network data packet changing over time with the second feature of the space to be detected changing over time includes:
- performing correlation analysis on the first feature and the second feature to obtain a probability value of correlation, and determining that the first feature and the second feature are successfully matched if the probability value of correlation reaches the probability threshold.
- the camera detection device 1000 may include a processor 1010 and a memory 1020 .
- the memory 1020 stores the following program modules:
- the data acquisition module 1021 is configured to acquire network data packets in the space to be detected;
- the feature matching module 1022 is configured to match the above-mentioned first feature of the network data packets changing over time with the second feature of the space to be detected changing over time;
- the detection result determining module 1023 is configured to determine whether there is a camera in the space to be detected according to the matching result of the first feature and the second feature.
- the processor 1010 is used to execute the above program modules.
- the data acquisition module 1021 is further configured to:
- obtain the working time of a flashlight used to flash or illuminate the space to be detected, and determine, according to the working time of the flashlight, the second feature of the space to be detected changing over time.
- the working time of the flashlight includes the working time of the flashlight in various postures.
- the data acquisition module 1021 is configured to:
- the second feature corresponding to each pose is determined according to the working time of the flashlight in each pose.
- the detection result determination module 1023 is configured to:
- determine that there is a camera in the space to be detected when the first feature is successfully matched with the second feature corresponding to at least one pose.
- the detection result determination module 1023 is further configured to:
- the position of the camera is determined according to the above at least one pose.
- the data acquisition module 1021 is further configured to:
- collect images of the space to be detected under each pose.
- the detection result determination module 1023 is configured to:
- the position of the camera is determined according to the at least one pose and the corresponding relationship between the image and the pose.
- the detection result determination module 1023 is further configured to:
- determine, among the collected images, a candidate image in which the camera is located according to the position of the camera, and prompt the position of the camera according to the candidate image.
- the candidate image is an image corresponding to a reverse pose after at least one pose is rotated by 180 degrees.
- the data acquisition module 1021 is configured to:
- In response to an operation on the flashlight control in the camera detection interface, control the flashlight to flash or illuminate the space to be detected, and obtain the working time of the flashlight.
- the data acquisition module 1021 is configured to:
- in response to an operation on a time control in the camera detection interface, determine the second feature of the space to be detected changing over time.
- the feature matching module 1022 is configured to:
- a pre-trained machine learning model is used to process the first feature and the second feature, and output a matching result.
- the feature matching module 1022 is further configured to:
- determine the labeled data of the sample data groups: if the first sample feature in a sample data group is correlated with the second sample feature, the labeled data is 1; if the first sample feature in a sample data group has no correlation with the second sample feature, the labeled data is 0;
- the machine learning model is trained by using the sample data set and its labeled data.
- the feature matching module 1022 is further configured to:
- match the format feature of the network data packets with the preset format feature.
- the detection result determination module 1023 is configured to:
- the matching result of the first feature and the second feature and the matching result of the format feature of the network data packet and the preset format feature, it is determined whether there is a camera in the space to be detected.
- the detection result determination module 1023 is configured to:
- when the matching result of the first feature and the second feature is a successful match, and the matching result of the format feature of the network data packets and the preset format feature is also a successful match, determine that there is a camera in the space to be detected.
- the first feature includes the first time point when the network data packet changes;
- the second feature includes the second time point when the space to be detected changes;
- the matching of the first feature of the network data packets changing over time with the second feature of the space to be detected changing over time includes:
- the first time point is matched with the second time point.
- the above-mentioned matching of the first time points and the second time points includes:
- pairing the first time points with the second time points to obtain multiple time point pairs, and determining that the first time points and the second time points are successfully matched if the difference between the first time point and the second time point in each time point pair does not exceed the time difference threshold.
- the above-mentioned matching of the first time point and the second time point further includes:
- Time compensation is performed on the first time point or the second time point before pairing the first time point with the second time point.
- the above-mentioned matching of the first feature of the network data packet changing over time with the second feature of the space to be detected changing over time includes:
- performing correlation analysis on the first feature and the second feature to obtain a probability value of correlation, and determining that the first feature and the second feature are successfully matched if the probability value of correlation reaches the probability threshold.
- Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which can be realized in the form of a program product, which includes program code.
- When the program product is run on an electronic device, the program code is used to make the electronic device perform the steps described in the "Exemplary Methods" section above of this specification according to various exemplary embodiments of the present disclosure.
- the program product can be implemented as a portable compact disk read only memory (CD-ROM) and include program code, and can run on an electronic device, such as a personal computer.
- the program product of the present disclosure is not limited thereto.
- a readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
- a program product may take the form of any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
- a computer readable signal medium may include a data signal carrying readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a readable signal medium may also be any readable medium other than a readable storage medium that can transmit, propagate, or transport a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
- The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- The remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Studio Devices (AREA)
Abstract
A camera detection method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring a network data packet in a space to be detected (S310); matching a first feature of the network data packet that changes over time with a second feature of the space that changes over time (S320); and according to the matching result of the first feature and the second feature, determining whether a camera is present in the space (S330). A camera can effectively be detected, and the detection result is highly accurate.
Description
This application claims priority to the Chinese patent application filed on June 10, 2021 with application number 202110649875.4 and entitled "Camera detection method and apparatus, storage medium, and electronic device", the entire content of which is incorporated herein by reference.
The present disclosure relates to the technical field of information security, and in particular to a camera detection method, a camera detection apparatus, a computer-readable storage medium, and an electronic device.
With the development of electronic devices and communication technology, cameras are used more and more widely in various industries. However, some criminals install cameras in hotel rooms, bathrooms, changing rooms, rental houses, and other places to take candid shots, which severely infringes people's privacy and personal safety.
Most of the above-mentioned cameras are pinhole cameras. When installed in socket holes, routers, set-top boxes, gaps in walls, and similar locations, they are extremely well hidden and difficult to find. Therefore, how to effectively detect such cameras is a technical problem to be urgently solved in the industry.
Contents of the invention
The present disclosure provides a camera detection method, a camera detection apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of the present disclosure, there is provided a camera detection method, including: acquiring network data packets in a space to be detected; matching a first feature of the network data packets changing over time with a second feature of the space to be detected changing over time; and determining, according to a matching result of the first feature and the second feature, whether there is a camera in the space to be detected.
According to a second aspect of the present disclosure, there is provided a camera detection apparatus, including a processor and a memory. The processor is configured to execute the following program modules stored in the memory: a data acquisition module configured to acquire network data packets in a space to be detected; a feature matching module configured to match a first feature of the network data packets changing over time with a second feature of the space to be detected changing over time; and a detection result determination module configured to determine, according to a matching result of the first feature and the second feature, whether there is a camera in the space to be detected.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the camera detection method of the first aspect and its possible implementations are carried out.
According to a fourth aspect of the present disclosure, there is provided an electronic device, including a processor and a memory for storing executable instructions of the processor, wherein the processor is configured to perform, by executing the executable instructions, the camera detection method of the first aspect and its possible implementations.
FIG. 1 shows a system architecture diagram of the operating environment in this exemplary embodiment;
FIG. 2 shows a schematic structural diagram of an electronic device in this exemplary embodiment;
FIG. 3 shows a flowchart of a camera detection method in this exemplary embodiment;
FIG. 4 shows a flowchart of determining a second feature in this exemplary embodiment;
FIG. 5 shows a schematic diagram of a camera detection interface in this exemplary embodiment;
FIG. 6 shows a schematic diagram of matching network data packets with flashlight working time in this exemplary embodiment;
FIG. 7 shows a flowchart of determining a camera position in this exemplary embodiment;
FIG. 8 shows a flowchart of prompting a camera position in this exemplary embodiment;
FIG. 9 shows a schematic structural diagram of a camera detection apparatus in this exemplary embodiment;
FIG. 10 shows a schematic structural diagram of another camera detection apparatus in this exemplary embodiment.
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or other methods, components, devices, steps, and the like may be adopted. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated descriptions thereof will therefore be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
The flowcharts shown in the figures are only illustrative and do not necessarily include all steps. For example, some steps may be decomposed, while other steps may be combined or partly combined, so the actual execution order may change according to the actual situation.
In one solution of the related art, for a camera configured with a supplementary light source such as infrared, whether a camera is present is determined by detecting whether a light source of a specific wavelength exists in the room. However, the detection of such a light source is easily affected by ambient illumination, and accuracy is difficult to guarantee; moreover, when the camera is not equipped with a supplementary light source, or there is an obstruction in front of the camera, this method cannot achieve effective detection.
In view of the above problems, exemplary embodiments of the present disclosure first provide a camera detection method. FIG. 1 shows a system architecture diagram of the operating environment of this exemplary embodiment. Referring to FIG. 1, the system architecture 100 may include a data capture device 110 and a data analysis device 120. The data capture device 110 may be a device with a network communication function, such as a mobile phone, a tablet computer, or a personal computer. The data capture device 110 is located in the space to be detected and is used to capture network data packets in the space to be detected. The space to be detected includes, but is not limited to, hotel rooms, bathrooms, changing rooms, and rental houses. The data capture device 110 and the data analysis device 120 may be connected through a wired or wireless communication link, so that the data capture device 110 sends the captured network data packets to the data analysis device 120. The data analysis device 120 may be another terminal connected to the data capture device 110, or a background server that provides a camera detection service. The data analysis device 120 is configured to analyze the network data packets to detect whether there is a camera in the space to be detected.
In one embodiment, the system architecture 100 may further include a change construction device 130, configured to actively construct changes in the space to be detected. For example, the change construction device 130 may be a flashlight device, which constructs light and dark changes in the space to be detected by flashing into it. The change construction device 130 may also be a projection device, which constructs pattern and texture changes in the space to be detected by projecting onto it. The change construction device 130 may include a camera module, which collects images of the space to be detected while constructing the changes; these images may be sent to the data analysis device 120 to assist in detecting the camera.
In one embodiment, any two or more of the data capture device 110, the data analysis device 120, and the change construction device 130 may be integrated into one device. For example, a mobile phone equipped with a flashlight can implement the functions of both the data capture device 110 and the change construction device 130: it constructs changes in the space to be detected by controlling the flashlight while capturing network data packets, and sends the network data packets to a background server for analysis to realize camera detection. Alternatively, the mobile phone may also implement the function of the data analysis device 120 at the same time, analyzing the captured network data packets locally to realize camera detection.
Exemplary embodiments of the present disclosure also provide an electronic device for performing the above camera detection method. The electronic device may be the data analysis device 120 described above.
Taking the mobile terminal 200 in FIG. 2 as an example, the structure of the above electronic device is exemplarily described below. Those skilled in the art will appreciate that, apart from components specifically intended for mobile purposes, the configuration in FIG. 2 can also be applied to devices of a stationary type.
As shown in FIG. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a USB (Universal Serial Bus) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, a flashlight 292, a motor 293, keys 294, a SIM (Subscriber Identification Module) card interface 295, and the like.
The processor 210 may include one or more processing units. For example, the processor 210 may include an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-Network Processing Unit), etc.
The encoder can encode (i.e., compress) image or video data, for example, encode the collected image or video of the space to be detected to form corresponding code stream data, so as to reduce the bandwidth occupied by data transmission; the decoder can decode (i.e., decompress) the code stream data of an image or video to restore the image or video data. The mobile terminal 200 may support one or more encoders and decoders. In this way, the mobile terminal 200 can process images or videos in multiple encoding formats, for example: image formats such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), and BMP (Bitmap), and video formats such as MPEG (Moving Picture Experts Group) 1, MPEG2, H.263, H.264, and HEVC (High Efficiency Video Coding).
In one embodiment, the processor 210 may include one or more interfaces, through which connections are formed with other components of the mobile terminal 200.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include volatile memory and non-volatile memory. The processor 210 executes various functional applications and data processing of the mobile terminal 200 by running the instructions stored in the internal memory 221.
The external memory interface 222 may be used to connect an external memory, such as a Micro SD card, to expand the storage capacity of the mobile terminal 200. The external memory communicates with the processor 210 through the external memory interface 222 to implement data storage functions, such as storing images, videos, and other files.
The USB interface 230 is an interface conforming to the USB standard specification, and may be used to connect a charger to charge the mobile terminal 200, or to connect earphones or other electronic devices.
The charging management module 240 is configured to receive charging input from a charger. While charging the battery 242, the charging management module 240 can also supply power to the device through the power management module 241; the power management module 241 can also monitor the state of the battery.
The wireless communication function of the mobile terminal 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. The mobile communication module 250 can provide mobile communication solutions such as 2G, 3G, 4G, and 5G applied on the mobile terminal 200. The wireless communication module 260 can provide wireless communication solutions applied on the mobile terminal 200, such as WLAN (Wireless Local Area Network) (e.g., a Wi-Fi (Wireless Fidelity) network), BT (Bluetooth), GNSS (Global Navigation Satellite System), FM (Frequency Modulation), NFC (Near Field Communication), and IR (Infrared).
The mobile terminal 200 can implement a display function and display a user interface through the GPU, the display screen 290, the AP, and so on. For example, when the user performs camera detection, the mobile terminal 200 may display the interface of a camera detection App (application) on the display screen 290.
The mobile terminal 200 can implement a shooting function through the ISP, the camera module 291, the encoder, the decoder, the GPU, the display screen 290, the AP, and so on. For example, the user can enable the image or video capture function in the camera detection App, and images of the space to be detected can then be collected through the camera module 291.
The mobile terminal 200 can implement audio functions through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the AP, and so on.
The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, an air pressure sensor 2804, and the like, to realize corresponding sensing and detection functions. The depth sensor 2801 can collect depth data of the space to be detected, and the gyroscope sensor 2803 can collect pose data of the mobile terminal 200; these two kinds of data can assist in locating the camera.
The flashlight 292 is used to increase the exposure of the space to be detected, so as to change its light and dark state. The flashlight 292 may be arranged together with the camera module 291, forming a specific relative positional relationship with the cameras in the camera module 291. The flashlight 292 can work alone, or work in coordination with the camera module 291 when it captures images, for example flashing according to the shutter time. In addition, the flashlight 292 can also serve as a reminder, for example flashing when a call comes in or the battery is low.
The motor 293 can generate vibration prompts and can also be used for touch vibration feedback and the like. The keys 294 include a power key, volume keys, and the like.
The mobile terminal 200 may support one or more SIM card interfaces 295 for connecting SIM cards to implement functions such as calls and mobile communication.
The camera detection method of this exemplary embodiment is described below. Its application scenarios include, but are not limited to: a user in a hotel room opens the camera detection App on a mobile phone and captures network data packets, then the camera detection method of this exemplary embodiment is executed and the detection result is displayed in the App; or the mobile phone captures the network data packets and uploads them to a server, the server executes the camera detection method of this exemplary embodiment, and the detection result is returned to the mobile phone for display.
FIG. 3 shows an exemplary flow of the camera detection method, which may include:
Step S310, acquiring network data packets in the space to be detected;
Step S320, matching a first feature of the network data packets changing over time with a second feature of the space to be detected changing over time;
Step S330, determining, according to the matching result of the first feature and the second feature, whether there is a camera in the space to be detected.
Based on the above method, a change in the space to be detected is constructed, and the change in the acquired network data packets is matched against the change in the space to be detected, so as to detect whether there is a camera in the space. On the one hand, since a camera sends video data packets in a differential manner, those packets are correlated with changes in the space to be detected; detecting the camera by matching the change features of the network data packets and of the space gives high accuracy. On the other hand, this solution is suitable for detecting almost all cameras that need a network connection, rather than being limited to cameras equipped with a supplementary light source; it is little affected by ambient illumination and similar factors and places low requirements on the scene, which helps reduce missed detections. Furthermore, this solution performs detection based on changes in the space to be detected, so nearby cameras outside that space will not trigger false alarms, giving high reliability.
Each step in FIG. 3 is described in detail below.
Referring to FIG. 3, in step S310, network data packets in the space to be detected are acquired.
The data capture device located in the space to be detected can capture network data packets, including but not limited to wireless LAN data packets, Bluetooth data packets, and mobile network data packets. The capture can be implemented on the data capture device through relevant software or settings. Taking the capture of wireless LAN packets as an example, setting the network card of the data capture device to promiscuous mode allows it to capture all network data packets passing by, regardless of their destination addresses.
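Purely as an illustration of this capture step (not part of the original disclosure), packet sniffing on a device whose wireless interface already supports monitor/promiscuous mode might look like the following sketch; the scapy library and the interface name "wlan0mon" are assumptions.

```python
# Hypothetical sketch: capture nearby Wi-Fi frames with scapy, assuming the
# wireless interface has already been switched to monitor/promiscuous mode;
# only link-layer metadata is kept, since payloads usually stay encrypted.
from scapy.all import sniff

captured = []  # (timestamp, frame length, source MAC) tuples

def record(pkt):
    captured.append((float(pkt.time), len(pkt), getattr(pkt, "addr2", None)))

# Capture for 60 seconds on the monitoring interface (name is illustrative).
sniff(iface="wlan0mon", prn=record, store=False, timeout=60)
print(f"captured {len(captured)} frames")
```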
The captured network data packets include packets sent by all network devices within a certain range. If there is a camera in the space to be detected, the captured packets also include the video data packets sent by the camera. In subsequent processing, the camera's video data packets can be identified from the network data packets and examined. Therefore, this exemplary embodiment can detect a camera even when both the camera and other network devices are present in the space to be detected.
In one embodiment, in order to exclude the influence of other network devices, the user may be guided to actively turn them off. For example, when the user starts the camera detection service on the data capture device, a prompt may be displayed asking the user to turn off known network devices in the space to be detected, such as smart appliances, or to turn off the network connection function of these devices. This ensures that the captured network data packets consist mainly of video data packets sent by the camera, improving the efficiency of subsequent processing and the accuracy of camera detection.
After the data capture device captures the network data packets, the data analysis device can obtain them from the data capture device for subsequent processing. If the data capture device and the data analysis device are two separate devices, the data capture device can send the network data packets to the data analysis device through the network; if they are one device, the packets can be passed on through internal inter-process communication.
Continuing to refer to FIG. 3, in step S320, the first feature of the network data packets changing over time is matched with the second feature of the space to be detected changing over time.
Here, the first feature refers to how the network data packets change over time, and the second feature refers to how the space to be detected changes over time; "first" and "second" merely distinguish the different subjects of the features.
The first feature may describe how one or more indicators of the network data packets change over time. The indicators include, but are not limited to: the size of a single packet, the time interval between adjacent packets, and the number of packets per unit time. These generally belong to network-communication-layer information and can be obtained without decrypting the packets. Network data packets themselves carry timestamps, and the data capture device can also record time information when capturing them, so it is easy to map the above indicators to time, analyze how the indicators change over time, and obtain the first feature.
The present disclosure does not limit the specific form of the first feature; for example, it may be a sequence of time-indicator pairs.
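For illustration only, a time-indicator sequence of this kind could be built as in the following sketch; the per-second binning and the helper name are assumptions rather than the disclosed implementation.

```python
# Hypothetical sketch: turn raw (timestamp, size) records into a per-second
# traffic series that serves as the "first feature" (indicator vs. time).
from collections import defaultdict

def first_feature(records, bin_seconds=1.0):
    """records: iterable of (timestamp, packet_size) tuples; returns a
    sorted list of (bin_start_time, total_bytes_in_bin) pairs."""
    bins = defaultdict(int)
    for ts, size in records:
        bins[int(ts // bin_seconds) * bin_seconds] += size
    return sorted(bins.items())
```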
In this exemplary embodiment, a change in the space to be detected refers to a change that can cause a significant change in the picture captured by a camera. During camera detection, an active influence can be exerted on the space to be detected to cause strong changes in brightness, pattern, texture, and the like, including but not limited to: switching a flashlight on and off, or switching the lights, curtains, etc. in the space on and off, causing light and dark changes; or projecting or moving a pattern that contrasts strongly with the space itself, for example projecting a cartoon animation onto a wall, or moving a cartoon poster into and out of the space several times, causing pattern and texture changes.
The above second feature can be obtained by recording the time points at which the space to be detected changes.
The present disclosure does not limit the specific form of the second feature; for example, it may be a sequence of time points at which the space to be detected changes.
How the second feature is obtained is specifically illustrated below.
In one embodiment, when the change in the space to be detected is created by switching the flashlight, the second feature can be determined according to the working time of the flashlight. Referring to FIG. 4, the camera detection method may include:
Step S410, acquiring the working time during which the flashlight flashes or illuminates the space to be detected;
Step S420, determining the second feature of the space to be detected changing over time according to the working time of the flashlight.
It should be noted that when the flashlight is used to construct the change in the space to be detected, the flashlight may flash, or it may be kept on for a period of time for illumination (like the torch function on a mobile phone). The working time of the flashlight may be sent to the data analysis device by the flashlight device (such as the change construction device described above), or obtained directly by the data analysis device when it controls the flashlight. The working time of the flashlight includes the start time and end time of each flash or illumination, or includes the start time and duration of each flash or illumination, from which the end time can be calculated. The start time and end time of each flash or illumination are taken as the time points at which the space to be detected changes, and the sequence they form can serve as the above second feature.
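A minimal sketch (an assumption, not text from the original) of turning the flashlight working time into the second feature, i.e. a list of change time points, is shown below.

```python
# Hypothetical sketch: each flash/illumination interval contributes two
# change time points (its start and its end), forming the second feature.
def second_feature(work_intervals):
    """work_intervals: list of (start_time, end_time) for each flash or
    illumination period; returns the sorted change time points."""
    points = []
    for start, end in work_intervals:
        points.extend([start, end])
    return sorted(points)

# Example: two flashes recorded by the app.
print(second_feature([(10.0, 10.4), (15.2, 15.6)]))  # [10.0, 10.4, 15.2, 15.6]
```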
The flashlight can be controlled automatically by the system or manually by the user. In one embodiment, step S410 may include:
in response to an operation on a flashlight control in the camera detection interface, controlling the flashlight to flash or illuminate the space to be detected, and acquiring the working time of the flashlight.
FIG. 5 shows a schematic diagram of a camera detection interface 500, which may be a user interface in the camera detection App. The camera detection interface 500 provides a flashlight control 510; the user can tap or long-press it to trigger a flash, and the system records the working time of the flashlight. It should be understood that the flashlight control 510 can also work like a lighting switch: for example, tapping or long-pressing the flashlight control 510 turns the flashlight on and keeps it on, tapping or long-pressing again turns it off, and the system records the working time of the flashlight.
In one embodiment, when the user uses the camera detection App for the first time, or uses the flashlight function in the App for the first time, the user is asked to grant the App permission to use the flashlight. After the user agrees, the App can call the relevant system services to control the flashlight and obtain data such as its working time.
In one embodiment, when the user creates changes in the space to be detected without the help of the change construction device, for example by manually switching curtains or lights or moving a cartoon poster as described above, the data analysis device cannot obtain the time at which the space changes from the change construction device. In this case, the camera detection method may include:
in response to an operation on a time control in the camera detection interface, determining the second feature of the space to be detected changing over time.
The time control allows the user to manually record the time at which the space to be detected changes. Relevant prompt information may be displayed in the camera detection interface to prompt the user to operate the time control while manually constructing the change, for example: "Please turn the main room light on and off several times, and tap the xx button each time you turn the light on or off." The system records the times at which the user operates the time control as the times at which the space to be detected changes, from which the second feature can be obtained.
When the space to be detected changes, the picture captured by the camera also changes. A camera usually processes the picture with differential encoding or transmission, so when the picture changes, the video data packets sent by the camera also change, for example the size or number of video data packets increases significantly.
It follows that when there is a camera in the space to be detected, the change in the space should be correlated with the change in the video data packets sent by the camera. This exemplary embodiment matches the above first feature and second feature to determine whether the network data packets are correlated with the change in the space to be detected.
In one embodiment, step S320 may include:
processing the first feature and the second feature with a pre-trained machine learning model, and outputting the matching result.
For example, a large number of first sample features of network data packets changing over time and corresponding second sample features of a test scene changing over time can be acquired in the test scene. One first sample feature and its corresponding second sample feature form one sample data group. Part of the sample data groups are acquired while changes are actually being constructed in the test scene; in these, the first sample feature is correlated with the second sample feature and the label data (ground truth) is 1. In the remaining sample data groups the first sample feature has no correlation with the second sample feature and the label data is 0. The initially constructed machine learning model, which may for example be a neural network model, is trained with the sample data groups and their label data; when a preset accuracy is reached, a trained machine learning model is obtained. In actual detection, the above first feature and second feature are input into the machine learning model, which outputs whether the two match.
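Purely as an illustration of this training procedure, the sketch below trains a binary matcher on sample groups labelled 1 (correlated) or 0 (not correlated); the fixed-length feature encoding and the use of scikit-learn's logistic regression are assumptions, not the disclosed model.

```python
# Hypothetical sketch: train a matcher on labelled sample data groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(first_feat, second_feat, n_bins=32):
    """Toy encoding: resample both numeric feature sequences onto a fixed
    grid and concatenate them into one input vector."""
    f = np.interp(np.linspace(0, 1, n_bins),
                  np.linspace(0, 1, len(first_feat)), first_feat)
    s = np.interp(np.linspace(0, 1, n_bins),
                  np.linspace(0, 1, len(second_feat)), second_feat)
    return np.concatenate([f, s])

def train_matcher(sample_groups, labels):
    """sample_groups: list of (first_sample_feature, second_sample_feature);
    labels: 1 if the two are correlated, 0 otherwise."""
    X = np.stack([encode(f, s) for f, s in sample_groups])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, np.asarray(labels))
    return model
```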
In another embodiment, the first feature includes first time points at which the network data packets change, and there may be multiple first time points; the second feature includes second time points at which the space to be detected changes, and there may also be multiple second time points. In this case, step S320 may include:
matching the first time points with the second time points.
Matching the first time points with the second time points means determining whether the two kinds of time points are correlated in their temporal distribution.
For example, the timestamp and packet size are extracted from the network data packets, and by fitting the relationship between packet size and time, the abrupt change points obtained from the fitting can be determined as the first time points. The time points at which the space to be detected changes are recorded as the second time points. The first time points and the second time points are paired to obtain multiple time point pairs, each consisting of one first time point and one corresponding second time point. If the difference between the first and second time point in every pair does not exceed a preset time difference threshold (which can be set empirically, e.g., 1 second or 3 seconds), it is determined that the first time points and the second time points are successfully matched.
Considering that there may be a certain delay in the camera sending video data packets, time compensation can be applied to the first time points or the second time points as a whole. For example, a time compensation value is determined from the difference between the first of the first time points and the first of the second time points; this compensation value is then added to all the second time points; the first and second time points are then paired, and whether the difference in each pair stays within the time difference threshold is checked to obtain the matching result.
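The pairing logic can be sketched as follows; this is only an illustration of the rule just described (pair-wise differences against a time-difference threshold, with a global offset as compensation), and the helper names are hypothetical.

```python
# Hypothetical sketch: match first time points (packet changes) against
# second time points (space changes) using time compensation and a threshold.
def match_time_points(first_points, second_points, threshold=3.0):
    if not first_points or not second_points:
        return False
    # Compensate for encoding/transmission delay using the offset between
    # the first entries of the two sequences.
    offset = first_points[0] - second_points[0]
    compensated = [t + offset for t in second_points]
    # Pair the sorted sequences and require every pair to stay within the
    # threshold (sequences are assumed to be of equal length here).
    pairs = zip(sorted(first_points), sorted(compensated))
    return all(abs(a - b) <= threshold for a, b in pairs)
```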
FIG. 6 shows a schematic diagram of matching the times at which the size of the network data packets changes (the first time points) against the times at which the flashlight works (the second time points). It can be seen that although the first time points and the second time points are not exactly the same, the two show a strong correlation. After pairing the first and second time points, the time difference in every pair is found to be smaller than the 3-second time difference threshold, so it can be determined that the first time points and the second time points are successfully matched.
In yet another embodiment, the first feature and the second feature can be treated as two variables for correlation analysis. For example, the first feature may be the size of the network data packets at different moments, and the second feature may be whether the flashlight is on at different moments (1 if on, 0 if not). A statistical correlation analysis is then applied to the two variables to output a probability value of the correlation; if this probability value reaches a preset probability threshold (such as 70% or 80%), it is determined that the first feature and the second feature are successfully matched.
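A hedged illustration of this correlation-analysis variant follows; resampling both variables onto a common time grid and using a Pearson coefficient as the correlation score are assumptions.

```python
# Hypothetical sketch: correlate packet size over time with a 0/1 flashlight
# on/off signal sampled on the same time grid, then compare the score
# against a threshold.
import numpy as np

def features_correlated(packet_sizes, flash_on, threshold=0.7):
    """packet_sizes, flash_on: equal-length samples on a common time grid."""
    r = np.corrcoef(np.asarray(packet_sizes, float),
                    np.asarray(flash_on, float))[0, 1]
    if np.isnan(r):          # e.g. one signal is constant
        return False
    return abs(r) >= threshold
```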
Continuing to refer to FIG. 3, in step S330, whether there is a camera in the space to be detected is determined according to the matching result of the first feature and the second feature.
When the first feature and the second feature are successfully matched, it is determined that there is a camera in the space to be detected; when they are not successfully matched, it is determined that there is no camera. The corresponding detection result can then be displayed in the camera detection interface.
It can be seen from the above that only when the camera is located inside the space to be detected and films that space will the video data packets it sends respond to changes in the space, so that the first feature matches the second feature. When the camera is outside the space to be detected and is not filming it, even if the signal of its video data packets passes through the space and is captured by the data capture device, those packets will not respond to changes in the space, the first feature will not match the second feature, and this exemplary embodiment will not misjudge that a camera is present. Therefore, this exemplary embodiment can accurately confine the detection range to the space to be detected and guarantee the accuracy of the detection result.
In one embodiment, the network data packets captured in step S310 may include many different types of packets, for example packets sent by multiple network devices in the space to be detected. Network data packets from different sources can be grouped according to their header information, which includes but is not limited to: IP (Internet Protocol) address, MAC (Media Access Control) address (i.e., physical address), encoding information, and communication protocol information. For example, the packets can be grouped by destination IP address, with packets sharing the same destination IP address placed in one group. A time-varying first feature is then derived for each group of packets, yielding multiple sets of first features, and each set is matched against the second feature of the space to be detected changing over time. When at least one set of first features is successfully matched with the second feature, it is determined that there is a camera in the space to be detected.
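As an illustrative sketch only, per-group matching after grouping by destination IP could look like this; the record layout and parameter names are assumptions.

```python
# Hypothetical sketch: group (timestamp, size, dst_ip) records by destination
# IP, build a first feature per group, and flag the space as soon as any
# group matches the second feature.
from collections import defaultdict

def detect_by_groups(records, second_feat, build_first_feat, match_fn):
    """records: iterable of (timestamp, size, dst_ip); build_first_feat and
    match_fn are supplied by the caller (e.g. the sketches shown earlier)."""
    groups = defaultdict(list)
    for ts, size, dst_ip in records:
        groups[dst_ip].append((ts, size))
    for dst_ip, recs in groups.items():
        if match_fn(build_first_feat(recs), second_feat):
            return True, dst_ip   # at least one group matches the space change
    return False, None
```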
In one embodiment, in addition to matching the first feature with the second feature, other features of the network data packets can be analyzed to determine whether they are video data packets sent by a camera. Specifically, the camera detection method may include:
matching a format feature of the network data packets with a preset format feature.
Correspondingly, in step S330, whether there is a camera in the space to be detected may be determined according to the matching result of the first feature and the second feature together with the matching result of the format feature of the network data packets and the preset format feature.
The format feature of the network data packets is a feature related to the data format, communication protocol, and so on, including but not limited to port, traffic, and MAC address. When a camera sends video data packets, it relies on specific data formats, communication protocols, and the like, so its video data packets have specific format features, namely the above preset format features. After the network data packets are captured, their format features can be parsed and matched against the preset format features, so that whether the packets are video data packets sent by a camera can be detected from the data-format side.
In one embodiment, the format features of the network data packets can be input into another pre-trained machine learning model (different from the one used to match the first and second features), which processes and recognizes them and outputs whether they match the preset format features.
This exemplary embodiment can combine the matching result of the first feature and the second feature with the matching result of the format feature and the preset format feature to determine the final detection result. Specifically, the two matching results can be combined with an "or" relationship, i.e., a camera is determined to be present when either result is a successful match, which further reduces missed detections; or they can be combined with an "and" relationship, i.e., a camera is determined to be present only when both results are successful matches, which further reduces false alarms. The present disclosure does not limit this.
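The sketch below illustrates one possible rule-based format check and the two combination policies just described; the example rules (an RTSP-style port and a caller-supplied set of suspicious MAC prefixes) are placeholders, not the disclosed preset format features.

```python
# Hypothetical sketch: a simple format-feature check plus combination of the
# two matching results under an "or" or "and" policy.
def format_matches(dst_port, src_mac, suspicious_oui_prefixes):
    rtsp_like = dst_port in (554, 8554)  # common RTSP ports (assumption)
    vendor_hit = any(src_mac.lower().startswith(p) for p in suspicious_oui_prefixes)
    return rtsp_like or vendor_hit

def camera_present(feature_match, format_match, policy="and"):
    # "or": fewer missed detections; "and": fewer false alarms.
    if policy == "or":
        return feature_match or format_match
    return feature_match and format_match
```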
When constructing the change in the space to be detected, two approaches can be taken:
The first is to construct an overall change in the space to be detected, for example switching lights or curtains, which changes the brightness of the entire space;
The second is to construct a local change in the space to be detected, for example turning on the flashlight toward a certain area or projecting a cartoon animation onto it.
If the second approach is adopted, the position of the camera can additionally be detected. The following description takes constructing a local change with a flashlight as an example; it should be understood that the principle is the same if the flashlight is replaced with another way of constructing a local change.
In one embodiment, the flashlight can flash into the space to be detected in multiple poses, so that the local area covered by the flash differs from pose to pose, and the working time of the flashlight in each pose is acquired. The change construction device may be equipped with an INS (Inertial Navigation System), such as the gyroscope sensor described above, to measure the pose change of the device and, based on a certain initial or reference pose, output the absolute pose of the device. Alternatively, the change construction device may be equipped with a camera module and perform visual localization by collecting images of the space to be detected in different poses, so as to output the pose of the device. Correspondingly, referring to FIG. 7, the camera detection method may include:
Step S710, determining the second feature corresponding to each pose according to the working time of the flashlight in that pose;
Step S720, matching the first feature with the second feature corresponding to each pose;
Step S730, when the first feature is successfully matched with the second feature corresponding to at least one pose, determining that there is a camera in the space to be detected.
If the first feature is successfully matched with the second feature corresponding to at least one pose, the change in the network data packets is correlated with the change in the local area of the space corresponding to that pose; it can then be determined that there is a camera in the space to be detected and that this camera can capture pictures of that local area.
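An illustrative sketch of the per-pose matching loop in steps S710–S730 follows; the data layout (flash intervals keyed by pose) and the helper names are assumptions.

```python
# Hypothetical sketch: match the first feature against the second feature
# derived from the flash working time in each pose; any successful pose is
# kept as a "suspicious pose" that also constrains the camera position.
def match_per_pose(first_feat, work_time_by_pose, build_second_feat, match_fn):
    """work_time_by_pose: dict mapping a pose id to the flash work intervals
    recorded while the device was held in that pose."""
    suspicious = []
    for pose, intervals in work_time_by_pose.items():
        if match_fn(first_feat, build_second_feat(intervals)):
            suspicious.append(pose)
    return bool(suspicious), suspicious  # (camera present?, suspicious poses)
```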
Further, referring to FIG. 7, the camera detection method may also include:
Step S740, determining the position of the camera according to the above at least one pose.
The above at least one pose is called a suspicious pose. For example, the network data packets acquired under a suspicious pose can be analyzed and, combined with the principle of radio direction finding, the bearing of the camera can be determined with a deviation of no more than 20 degrees.
In one embodiment, the camera detection method may also include:
acquiring multiple images captured of the space to be detected, and determining the correspondence between the images and the poses of the flashlight;
estimating the position of the camera according to the above suspicious pose and the correspondence between the images and the poses.
The relative position between the camera used to capture the images and the flashlight is fixed, so their pose transformation can be determined by pre-calibration; the pose of the capturing camera at the time an image is taken can then be converted into the pose of the flashlight, establishing the correspondence between images and flashlight poses. The capturing camera and the flashlight may also belong to the same camera module; for example, the change-constructing device may be a mobile phone whose camera module includes an RGB camera and a flashlight. To simplify calculation, the pose of the capturing camera may also be treated as equal to the pose of the flashlight.
Under the above suspicious pose, the local region covered by the flash is a local region that the hidden camera can capture, so the camera is presumed to lie in the direction opposite to that local region. Therefore, according to the correspondence between images and poses, the image corresponding to the opposite direction of the local region is found. For example, after the suspicious pose is determined, it is rotated by 180 degrees to obtain a reverse pose, the image corresponding to the reverse pose is acquired, and the camera is determined to be located in the region shown in that image.
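A minimal sketch of the reverse-pose lookup follows, assuming poses are represented by a yaw angle in degrees and that images are indexed by the yaw at which they were captured; these representations are assumptions made here for illustration.

```python
# Minimal sketch: rotate a suspicious pose by 180 degrees and pick the image
# captured closest to the reverse pose. Poses are simplified to a yaw angle.

from typing import Dict

def reverse_yaw(yaw_deg: float) -> float:
    """Yaw of the reverse pose, wrapped to [0, 360)."""
    return (yaw_deg + 180.0) % 360.0

def candidate_image_for(suspicious_yaw: float, images_by_yaw: Dict[float, str]) -> str:
    """Return the image whose capture yaw is closest to the reverse pose."""
    target = reverse_yaw(suspicious_yaw)

    def circ_dist(a: float, b: float) -> float:
        # Circular distance so that 359 degrees and 1 degree are treated as close.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    return images_by_yaw[min(images_by_yaw, key=lambda y: circ_dist(y, target))]
```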
In one embodiment, the user holds a mobile phone and performs flashing and image capture toward different regions of the space to be detected. The phone can build map data of the space, such as a map formed by a three-dimensional point cloud, from the captured images and its own poses. After a suspicious pose is determined, the images corresponding to the suspicious pose can be found among the captured images; the regions shown in these images are local regions that the hidden camera can capture. These regions are then located in the map data and, according to the positional relationship, the region in the opposite direction is determined as the region where the camera may be located, thereby estimating the position of the camera.
In one embodiment, while the user holds the phone and performs flashing and image capture toward different regions of the space to be detected, guidance information may be presented so that the user aims at the different regions in reasonable poses. For example, the user first moves the phone to capture images of the entire space to be detected and uploads them to a server; the server builds map data of the space by running a SLAM (Simultaneous Localization And Mapping) algorithm. Based on the map data, the server then plans a reasonable way for the user to construct the spatial flash changes: it guides the user to a suitable position in the space (generally a central position), starts the flashing from a certain direction, and guides the user to turn clockwise or counterclockwise; each time a suitable angle is reached, a prompt such as "Please stay at this position and flash" is displayed, so that the entire space to be detected is covered in a reasonable and comprehensive manner.
In one embodiment, referring to FIG. 8, after the position of the camera is determined, the following steps may also be performed:
Step S810: according to the position of the camera, determining, among the above multiple images, a candidate image in which the camera is located;
Step S820: prompting the position of the camera according to the candidate image.
The candidate image may be the image corresponding to the above reverse pose, or an image corresponding to the region in the map data where the camera may be located. The candidate image, or the local region of the candidate image where the camera is located, may be displayed in the camera detection interface, optionally together with a related text prompt such as "A camera may exist in the following region", so that the user can further search for the camera in the space to be detected.
Further, it is also possible to detect whether a suspicious light source exists in the candidate image, or to prompt the user to point the phone camera at the region of the candidate image again so that the system detects whether a suspicious light source exists, thereby locking onto the position of the camera more precisely.
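As one way such a suspicious-light-source check could be implemented (the disclosure does not specify the technique), the following sketch thresholds a candidate image for small, near-saturated bright blobs using OpenCV; the threshold and area limits are assumed values.

```python
# Minimal sketch: flag small, very bright blobs in a candidate image as
# suspicious light sources (OpenCV 4.x). Threshold and area limits are assumed.

import cv2

def find_suspicious_light_sources(image_path: str, min_area: int = 4, max_area: int = 400):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)  # near-saturated pixels
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:  # small glints, not large lamps or windows
            x, y, w, h = cv2.boundingRect(c)
            spots.append((x, y, w, h))
    return spots
```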
In one embodiment, the region where the camera may be located may be marked in the map data of the space to be detected, together with related text prompt information, so as to facilitate further searching by the user.
Exemplary embodiments of the present disclosure also provide a camera detection apparatus, which may be configured in the above analysis device. Referring to FIG. 9, the camera detection apparatus 900 may include:
a data acquisition module 910, configured to acquire network data packets in the space to be detected;
a feature matching module 920, configured to match a first feature of the network data packets varying over time with a second feature of the space to be detected varying over time;
a detection result determination module 930, configured to determine, according to a matching result of the first feature and the second feature, whether a camera exists in the space to be detected.
In one embodiment, the data acquisition module 910 is further configured to:
acquire the working time during which a flashlight flashes or illuminates the space to be detected;
determine, according to the working time of the flashlight, the second feature of the space to be detected varying over time, as sketched below.
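A minimal sketch of turning the flashlight working time into a time-varying second feature, here a binary on/off series sampled at a fixed rate; the sampling rate and the interval representation are assumptions, not requirements of the disclosure.

```python
# Minimal sketch: build a binary "space brightness" series (the second feature)
# from flashlight working intervals. Sampling rate and interval format are assumed.

from typing import List, Tuple

def second_feature_from_flash(intervals: List[Tuple[float, float]],
                              duration_s: float,
                              sample_hz: float = 10.0) -> List[int]:
    """intervals: list of (start_s, end_s) during which the flashlight was on."""
    n = int(duration_s * sample_hz)
    feature = [0] * n
    for start, end in intervals:
        for i in range(int(start * sample_hz), min(n, int(end * sample_hz))):
            feature[i] = 1  # space is brightened at this sample
    return feature
```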
In one embodiment, the working time of the flashlight includes the working time of the flashlight in multiple poses.
The data acquisition module 910 is configured to:
determine the second feature corresponding to each pose according to the working time of the flashlight in that pose.
The detection result determination module 930 is configured to:
determine that a camera exists in the space to be detected when the first feature is successfully matched with the second feature corresponding to at least one pose.
In one embodiment, the detection result determination module 930 is further configured to:
determine the position of the camera according to the above at least one pose.
In one embodiment, the data acquisition module 910 is further configured to:
acquire multiple images captured of the space to be detected, and determine the correspondence between the images and the poses of the flashlight.
The detection result determination module 930 is configured to:
determine the position of the camera according to the above at least one pose and the correspondence between the images and the poses.
In one embodiment, the detection result determination module 930 is further configured to:
after the position of the camera is determined, determine, among the above multiple images and according to that position, a candidate image in which the camera is located;
prompt the position of the camera according to the candidate image.
In one embodiment, the candidate image is the image corresponding to the reverse pose obtained by rotating the above at least one pose by 180 degrees.
In one embodiment, the data acquisition module 910 is configured to:
in response to an operation on a flashlight control in the camera detection interface, control the flashlight to flash or illuminate the space to be detected, and acquire the working time of the flashlight.
In one embodiment, the data acquisition module 910 is configured to:
determine the second feature of the space to be detected varying over time in response to an operation on a time control in the camera detection interface.
In one embodiment, the feature matching module 920 is configured to:
process the first feature and the second feature by using a pre-trained machine learning model, and output the matching result.
In one embodiment, the feature matching module 920 is further configured to:
acquire, in a test scene, first sample features of network data packets varying over time and corresponding second sample features of the test scene varying over time;
form one sample data group from one first sample feature and a corresponding second sample feature, so as to obtain multiple sample data groups;
acquire label data of the sample data groups, where the label data is 1 if the first sample feature and the second sample feature in a sample data group are correlated, and 0 if they are not correlated;
train the machine learning model by using the sample data groups and their label data, as sketched below.
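The following sketch illustrates one way to train such a matcher as a binary classifier on concatenated sample features using scikit-learn; the choice of logistic regression and the fixed-length feature layout are assumptions, not requirements of the disclosure.

```python
# Minimal sketch: train a binary matcher on (first_feature, second_feature) pairs
# labeled 1 (correlated) or 0 (not correlated). The model choice is an assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_matcher(first_samples, second_samples, labels):
    """Each element of first_samples / second_samples is a fixed-length sequence."""
    X = np.hstack([np.asarray(first_samples), np.asarray(second_samples)])
    y = np.asarray(labels)  # 1 = correlated, 0 = not correlated
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

def match(model, first_feature, second_feature) -> bool:
    x = np.hstack([first_feature, second_feature]).reshape(1, -1)
    return bool(model.predict(x)[0] == 1)
```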
In one embodiment, the feature matching module 920 is further configured to:
match the format feature of the network data packets with a preset format feature.
The detection result determination module 930 is configured to:
determine whether a camera exists in the space to be detected according to the matching result of the first feature and the second feature and the matching result of the format feature of the network data packets and the preset format feature.
In one embodiment, the detection result determination module 930 is configured to:
determine that a camera exists in the space to be detected if the matching result of the first feature and the second feature is a successful match and the matching result of the format feature of the network data packets and the preset format feature is also a successful match.
In one embodiment, the first feature includes a first time point at which the network data packets change, and the second feature includes a second time point at which the space to be detected changes; matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time includes:
matching the first time point with the second time point.
In one embodiment, matching the first time point with the second time point includes:
pairing the first time points with the second time points to obtain multiple time point pairs, each time point pair including one first time point and a corresponding second time point;
determining that the first time points and the second time points are successfully matched if, in every time point pair, the difference between the first time point and the second time point does not exceed a time difference threshold; a sketch of this check follows.
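A minimal sketch of the pairing-and-threshold test, with an optional constant offset standing in for the time compensation mentioned below; the pairing-by-order rule, the threshold value, and the function names are assumptions.

```python
# Minimal sketch: pair change time points by order, optionally compensate a fixed
# offset, and require every pair to differ by no more than a threshold.

from typing import List

def time_points_match(first_points: List[float], second_points: List[float],
                      threshold_s: float = 0.5, compensation_s: float = 0.0) -> bool:
    if len(first_points) != len(second_points) or not first_points:
        return False
    compensated = [t + compensation_s for t in second_points]  # time compensation
    return all(abs(a - b) <= threshold_s
               for a, b in zip(sorted(first_points), sorted(compensated)))
```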
In one embodiment, matching the first time point with the second time point further includes:
performing time compensation on the first time point or the second time point before pairing the first time point with the second time point.
In one embodiment, matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time includes:
taking the first feature and the second feature as two variables and performing correlation analysis to obtain a probability value of the correlation; if the probability value of the correlation reaches a probability threshold, determining that the first time point and the second time point are successfully matched, as sketched below.
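As one concrete reading of this correlation test (the disclosure does not fix the statistic), the sketch below computes the Pearson correlation coefficient with SciPy and compares it against a threshold; using Pearson and treating its coefficient as the "probability value" are assumptions made here.

```python
# Minimal sketch: correlate the two time-varying features and compare against a
# threshold. Using the Pearson coefficient here is an assumption.

from scipy.stats import pearsonr

def features_match_by_correlation(first_feature, second_feature,
                                  probability_threshold: float = 0.8) -> bool:
    r, _p_value = pearsonr(first_feature, second_feature)
    return r >= probability_threshold
```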
Exemplary embodiments of the present disclosure also provide another camera detection apparatus. Referring to FIG. 10, the camera detection apparatus 1000 may include a processor 1010 and a memory 1020, where the memory 1020 stores the following program modules:
a data acquisition module 1021, configured to acquire network data packets in the space to be detected;
a feature matching module 1022, configured to match a first feature of the network data packets varying over time with a second feature of the space to be detected varying over time;
a detection result determination module 1023, configured to determine, according to a matching result of the first feature and the second feature, whether a camera exists in the space to be detected.
The processor 1010 is configured to execute the above program modules.
In one embodiment, the data acquisition module 1021 is further configured to:
acquire the working time during which a flashlight flashes or illuminates the space to be detected;
determine the second feature of the space to be detected varying over time according to the working time of the flashlight.
In one embodiment, the working time of the flashlight includes the working time of the flashlight in multiple poses.
The data acquisition module 1021 is configured to:
determine the second feature corresponding to each pose according to the working time of the flashlight in that pose.
The detection result determination module 1023 is configured to:
determine that a camera exists in the space to be detected when the first feature is successfully matched with the second feature corresponding to at least one pose.
In one embodiment, the detection result determination module 1023 is further configured to:
determine the position of the camera according to the above at least one pose.
In one embodiment, the data acquisition module 1021 is further configured to:
acquire multiple images captured of the space to be detected, and determine the correspondence between the images and the poses of the flashlight.
The detection result determination module 1023 is configured to:
determine the position of the camera according to the above at least one pose and the correspondence between the images and the poses.
In one embodiment, the detection result determination module 1023 is further configured to:
after the position of the camera is determined, determine, among the above multiple images and according to that position, a candidate image in which the camera is located;
prompt the position of the camera according to the candidate image.
In one embodiment, the candidate image is the image corresponding to the reverse pose obtained by rotating the above at least one pose by 180 degrees.
In one embodiment, the data acquisition module 1021 is configured to:
in response to an operation on a flashlight control in the camera detection interface, control the flashlight to flash or illuminate the space to be detected, and acquire the working time of the flashlight.
In one embodiment, the data acquisition module 1021 is configured to:
determine the second feature of the space to be detected varying over time in response to an operation on a time control in the camera detection interface.
In one embodiment, the feature matching module 1022 is configured to:
process the first feature and the second feature by using a pre-trained machine learning model, and output the matching result.
In one embodiment, the feature matching module 1022 is further configured to:
acquire, in a test scene, first sample features of network data packets varying over time and corresponding second sample features of the test scene varying over time;
form one sample data group from one first sample feature and a corresponding second sample feature, so as to obtain multiple sample data groups;
acquire label data of the sample data groups, where the label data is 1 if the first sample feature and the second sample feature in a sample data group are correlated, and 0 if they are not correlated;
train the machine learning model by using the sample data groups and their label data.
In one embodiment, the feature matching module 1022 is further configured to:
match the format feature of the network data packets with a preset format feature.
The detection result determination module 1023 is configured to:
determine whether a camera exists in the space to be detected according to the matching result of the first feature and the second feature and the matching result of the format feature of the network data packets and the preset format feature.
In one embodiment, the detection result determination module 1023 is configured to:
determine that a camera exists in the space to be detected if the matching result of the first feature and the second feature is a successful match and the matching result of the format feature of the network data packets and the preset format feature is also a successful match.
In one embodiment, the first feature includes a first time point at which the network data packets change, and the second feature includes a second time point at which the space to be detected changes; matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time includes:
matching the first time point with the second time point.
In one embodiment, matching the first time point with the second time point includes:
pairing the first time points with the second time points to obtain multiple time point pairs, each time point pair including one first time point and a corresponding second time point;
determining that the first time points and the second time points are successfully matched if, in every time point pair, the difference between the first time point and the second time point does not exceed a time difference threshold.
In one embodiment, matching the first time point with the second time point further includes:
performing time compensation on the first time point or the second time point before pairing the first time point with the second time point.
In one embodiment, matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time includes:
taking the first feature and the second feature as two variables and performing correlation analysis to obtain a probability value of the correlation; if the probability value of the correlation reaches a probability threshold, determining that the first time point and the second time point are successfully matched.
The details of each part of the above apparatuses have been described in detail in the method embodiments and thus are not repeated here.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above. In one embodiment, the program product may be implemented as a portable compact disc read-only memory (CD-ROM) including program code and may run on an electronic device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in combination with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the foregoing.
Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet by using an Internet service provider).
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
Those skilled in the art will understand that various aspects of the present disclosure may be implemented as a system, a method, or a program product. Therefore, various aspects of the present disclosure may be embodied in the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, and the like), or an implementation combining hardware and software, which may be collectively referred to herein as a "circuit", "module", or "system". Other embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field that are not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the claims.
It should be understood that the present disclosure is not limited to the precise constructions described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (20)
- A camera detection method, comprising: acquiring network data packets in a space to be detected; matching a first feature of the network data packets varying over time with a second feature of the space to be detected varying over time; and determining, according to a matching result of the first feature and the second feature, whether a camera exists in the space to be detected.
- The method according to claim 1, further comprising: acquiring a working time during which a flashlight flashes or illuminates the space to be detected; and determining, according to the working time of the flashlight, the second feature of the space to be detected varying over time.
- The method according to claim 2, wherein the working time of the flashlight comprises working times of the flashlight in multiple poses; the determining, according to the working time of the flashlight, the second feature of the space to be detected varying over time comprises: determining the second feature corresponding to each pose according to the working time of the flashlight in that pose; and the determining, according to the matching result of the first feature and the second feature, whether a camera exists in the space to be detected comprises: determining that a camera exists in the space to be detected when the first feature is successfully matched with the second feature corresponding to at least one pose.
- The method according to claim 3, wherein when the first feature is successfully matched with the second feature corresponding to at least one pose, the method further comprises: determining a position of the camera according to the at least one pose.
- The method according to claim 4, further comprising: acquiring multiple images captured of the space to be detected, and determining a correspondence between the images and the poses of the flashlight; wherein the determining the position of the camera according to the at least one pose comprises: determining the position of the camera according to the at least one pose and the correspondence between the images and the poses.
- The method according to claim 5, wherein after the position of the camera is determined, the method further comprises: determining, among the multiple images and according to the position of the camera, a candidate image in which the camera is located; and prompting the position of the camera according to the candidate image.
- The method according to claim 6, wherein the candidate image is an image corresponding to a reverse pose obtained by rotating the at least one pose by 180 degrees.
- The method according to claim 2, wherein the acquiring the working time during which the flashlight flashes or illuminates the space to be detected comprises: in response to an operation on a flashlight control in a camera detection interface, controlling the flashlight to flash or illuminate the space to be detected, and acquiring the working time of the flashlight.
- The method according to claim 1, further comprising: determining, in response to an operation on a time control in a camera detection interface, the second feature of the space to be detected varying over time.
- The method according to claim 1, wherein the matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time comprises: processing the first feature and the second feature by using a pre-trained machine learning model, and outputting the matching result.
- The method according to claim 10, further comprising: acquiring, in a test scene, first sample features of network data packets varying over time and corresponding second sample features of the test scene varying over time; forming one sample data group from one first sample feature and a corresponding second sample feature, so as to obtain multiple sample data groups; acquiring label data of the sample data groups, wherein the label data is 1 if the first sample feature and the second sample feature in a sample data group are correlated, and 0 if they are not correlated; and training the machine learning model by using the sample data groups and their label data.
- The method according to claim 1, further comprising: matching a format feature of the network data packets with a preset format feature; wherein the determining, according to the matching result of the first feature and the second feature, whether a camera exists in the space to be detected comprises: determining whether a camera exists in the space to be detected according to the matching result of the first feature and the second feature and a matching result of the format feature of the network data packets and the preset format feature.
- The method according to claim 12, wherein the determining whether a camera exists in the space to be detected according to the matching result of the first feature and the second feature and the matching result of the format feature of the network data packets and the preset format feature comprises: determining that a camera exists in the space to be detected if the matching result of the first feature and the second feature is a successful match and the matching result of the format feature of the network data packets and the preset format feature is also a successful match.
- The method according to claim 1, wherein the first feature comprises a first time point at which the network data packets change, and the second feature comprises a second time point at which the space to be detected changes; and the matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time comprises: matching the first time point with the second time point.
- The method according to claim 14, wherein the matching the first time point with the second time point comprises: pairing the first time points with the second time points to obtain multiple time point pairs, each time point pair comprising one first time point and a corresponding second time point; and determining that the first time points and the second time points are successfully matched if, in every time point pair, a difference between the first time point and the second time point does not exceed a time difference threshold.
- The method according to claim 15, wherein the matching the first time point with the second time point further comprises: performing time compensation on the first time point or the second time point before pairing the first time point with the second time point.
- The method according to claim 1, wherein the matching the first feature of the network data packets varying over time with the second feature of the space to be detected varying over time comprises: taking the first feature and the second feature as two variables and performing correlation analysis to obtain a probability value of the correlation; and if the probability value of the correlation reaches a probability threshold, determining that the first time point and the second time point are successfully matched.
- A camera detection apparatus, comprising a processor and a memory, the processor being configured to execute the following program modules stored in the memory: a data acquisition module, configured to acquire network data packets in a space to be detected; a feature matching module, configured to match a first feature of the network data packets varying over time with a second feature of the space to be detected varying over time; and a detection result determination module, configured to determine, according to a matching result of the first feature and the second feature, whether a camera exists in the space to be detected.
- A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 17.
- An electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method according to any one of claims 1 to 17 by executing the executable instructions.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110649875.4 | 2021-06-10 | ||
CN202110649875.4A CN113240053A (en) | 2021-06-10 | 2021-06-10 | Camera detection method and device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022257647A1 (en) | 2022-12-15 |
Family
ID=77139687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/090626 (WO2022257647A1, en) | Camera detection method and apparatus, storage medium, and electronic device | 2021-06-10 | 2022-04-29 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113240053A (en) |
WO (1) | WO2022257647A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320387A (en) * | 2023-04-06 | 2023-06-23 | 深圳博时特科技有限公司 | Camera module detection system and detection method |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113240053A (en) * | 2021-06-10 | 2021-08-10 | Oppo广东移动通信有限公司 | Camera detection method and device, storage medium and electronic equipment |
CN114125806B (en) * | 2021-09-24 | 2022-08-23 | 浙江大学 | Wireless camera detection method based on cloud storage mode of wireless network flow |
CN113891067A (en) * | 2021-09-24 | 2022-01-04 | 浙江大学 | Wireless network camera positioning method and device, storage medium and electronic equipment |
CN114554187B (en) * | 2022-02-21 | 2024-08-27 | Oppo广东移动通信有限公司 | Detection method, device, equipment, medium and program product of wireless camera |
CN114567770A (en) * | 2022-02-21 | 2022-05-31 | Oppo广东移动通信有限公司 | Equipment identification method and related device |
CN114650416B (en) * | 2022-05-24 | 2022-08-30 | 江西火眼信息技术有限公司 | Hidden camera finding method based on Internet monitoring |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000020857A (en) * | 1998-07-06 | 2000-01-21 | Mega Chips Corp | Monitoring device |
CN108718257A (en) * | 2018-05-23 | 2018-10-30 | 浙江大学 | A kind of wireless camera detection and localization method based on network flow |
CN113038375A (en) * | 2021-03-24 | 2021-06-25 | 武汉大学 | Method and system for sensing and positioning hidden camera |
CN113240053A (en) * | 2021-06-10 | 2021-08-10 | Oppo广东移动通信有限公司 | Camera detection method and device, storage medium and electronic equipment |
CN114554187A (en) * | 2022-02-21 | 2022-05-27 | Oppo广东移动通信有限公司 | Wireless camera detection method, device, equipment, medium and program product |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180211404A1 (en) * | 2017-01-23 | 2018-07-26 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | 3d marker model construction and real-time tracking using monocular camera |
CN110223284B (en) * | 2019-06-11 | 2023-06-02 | 深圳市启芯众志科技有限公司 | Detection method and detection device for pinhole camera based on intelligent terminal |
US11589207B2 (en) * | 2019-09-27 | 2023-02-21 | Samsung Electronics Co., Ltd. | Electronic device for identifying external electronic device and method of operating same |
KR20210062579A (en) * | 2019-11-20 | 2021-05-31 | 한국전자기술연구원 | System and method for detecting hidden camera using wifi |
CN111132120B (en) * | 2020-04-01 | 2020-10-16 | 北京三快在线科技有限公司 | Method, system and equipment for identifying camera device in room local area network |
CN111479275B (en) * | 2020-04-13 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Method, device and equipment for detecting suspicious equipment and storage medium |
KR102204338B1 (en) * | 2020-07-28 | 2021-01-19 | (주)넷비젼텔레콤 | Wireless IP camera detection system |
- 2021-06-10: CN application CN202110649875.4A filed, published as CN113240053A (status: active, Pending)
- 2022-04-29: PCT application PCT/CN2022/090626 filed, published as WO2022257647A1 (status: active, Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000020857A (en) * | 1998-07-06 | 2000-01-21 | Mega Chips Corp | Monitoring device |
CN108718257A (en) * | 2018-05-23 | 2018-10-30 | 浙江大学 | A kind of wireless camera detection and localization method based on network flow |
CN113038375A (en) * | 2021-03-24 | 2021-06-25 | 武汉大学 | Method and system for sensing and positioning hidden camera |
CN113240053A (en) * | 2021-06-10 | 2021-08-10 | Oppo广东移动通信有限公司 | Camera detection method and device, storage medium and electronic equipment |
CN114554187A (en) * | 2022-02-21 | 2022-05-27 | Oppo广东移动通信有限公司 | Wireless camera detection method, device, equipment, medium and program product |
Non-Patent Citations (3)
Title |
---|
CHENG YUSHI; JI XIAOYU; LU TIANYANG; XU WENYUAN: "On Detecting Hidden Wireless Cameras: A Traffic Pattern-based Approach", IEEE TRANSACTIONS ON MOBILE COMPUTING, vol. 19, no. 4, 21 February 2019 (2019-02-21), US , pages 907 - 921, XP011776381, ISSN: 1536-1233, DOI: 10.1109/TMC.2019.2900919 * |
KIM JONG, AHN GAIL-JOON, KIM SEUNGJOO, KIM YONGDAE, LOPEZ JAVIER, KIM TAESOO, CHENG YUSHI, JI XIAOYU, LU TIANYANG, XU WENYUAN: "DeWiCam : Detecting Hidden Wireless Cameras via Smartphones", PROCEEDINGS OF THE 2018 ON ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY , ASIACCS '18; JUNE 4–8, 2018, 29 May 2018 (2018-05-29), New York, New York, USA , pages 1 - 13, XP055934901, ISBN: 978-1-4503-5576-6, DOI: 10.1145/3196494.3196509 * |
TIAN LIU ; ZIYU LIU ; JUN HUANG ; RUI TAN ; ZHEN TAN: "Detecting Wireless Spy Cameras Via Stimulating and Probing", MOBILE SYSTEMS, APPLICATIONS, AND SERVICES, 10 June 2018 (2018-06-10), pages 243 - 255, XP058411388, ISBN: 978-1-4503-5720-3, DOI: 10.1145/3210240.3210332 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320387A (en) * | 2023-04-06 | 2023-06-23 | 深圳博时特科技有限公司 | Camera module detection system and detection method |
CN116320387B (en) * | 2023-04-06 | 2023-09-29 | 深圳博时特科技有限公司 | Camera module detection system and detection method |
Also Published As
Publication number | Publication date |
---|---|
CN113240053A (en) | 2021-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022257647A1 (en) | Camera detection method and apparatus, storage medium, and electronic device | |
CN112351322B (en) | Terminal device, method and system for realizing touch screen projection through remote controller | |
TW202105246A (en) | Face recognition method, electronic equipment and storage medium thereof | |
WO2021031609A1 (en) | Living body detection method and device, electronic apparatus and storage medium | |
CN112449332B (en) | Bluetooth connection method and electronic equipment | |
US20160007179A1 (en) | Fire alarm apparatus interworking with mobile phone | |
KR102389576B1 (en) | Apparatus and method for detecting counterfeit advertiser in wireless communication system | |
KR102390405B1 (en) | Doorbell | |
WO2021103423A1 (en) | Method and apparatus for detecting pedestrian events, electronic device and storage medium | |
US20090324211A1 (en) | Method and Device for Geo-Tagging an Object Before or After Creation | |
US20190364399A1 (en) | Control device and method | |
US11538276B2 (en) | Communication system, distributed processing system, distributed processing method, and recording medium | |
US11538316B2 (en) | Surveillance system and control method thereof | |
TW201501557A (en) | Internet protocol camera having network repeater function and configuration method thereof | |
US9167048B2 (en) | Method and apparatus for filtering devices within a security social network | |
WO2017181545A1 (en) | Object monitoring method and device | |
KR20130134585A (en) | Apparatus and method for sharing sensing information of portable device | |
CN110557740A (en) | Electronic equipment control method and electronic equipment | |
JP2017501598A (en) | Method and apparatus for broadcasting stream media data | |
CN115550986A (en) | Equipment detection method and electronic equipment | |
CN114063951A (en) | Screen projection abnormity processing method and electronic equipment | |
KR20150041939A (en) | A door monitoring system using real-time event detection and a method thereof | |
CN113838478B (en) | Abnormal event detection method and device and electronic equipment | |
CN113891067A (en) | Wireless network camera positioning method and device, storage medium and electronic equipment | |
KR101732382B1 (en) | The CPTED TOWER with CCTV for Crime Prevention and the CPTED system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22819249; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22819249; Country of ref document: EP; Kind code of ref document: A1 |