WO2020057350A1 - Dynamic object recognition method, device, and system - Google Patents

Dynamic object recognition method, device, and system

Info

Publication number
WO2020057350A1
Authority
WO
WIPO (PCT)
Prior art keywords
dynamic object
recognition area
definition camera
server
dynamic
Prior art date
Application number
PCT/CN2019/103772
Other languages
English (en)
French (fr)
Inventor
管凌
许序标
Original Assignee
深圳市九洲电器有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市九洲电器有限公司 filed Critical 深圳市九洲电器有限公司
Publication of WO2020057350A1 publication Critical patent/WO2020057350A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Definitions

  • the present invention relates to the field of intelligent security technology, and in particular, to a method, a device, and a system for dynamic object recognition.
  • Dynamic objects refer to objects whose movement state changes with time, and correspond to static objects.
  • dynamic object recognition technology has been widely applied in many areas of people's daily lives, such as medical imaging, visual reconstruction, autonomous navigation, and visual control.
  • embodiments of the present invention provide a method, a device, and a system for identifying a dynamic object, which solve the technical problem that it is currently difficult to obtain clear images of a dynamic object under its different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
  • the embodiments of the present invention aim to provide a method, a device, and a system for identifying a dynamic object, which solve the technical problem that it is currently difficult to obtain clear images of a dynamic object under its different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
  • the embodiments of the present invention provide the following technical solutions:
  • an embodiment of the present invention provides a dynamic object recognition method, which is applied to a dynamic object recognition system.
  • the dynamic object recognition system includes a server and at least one high-definition camera.
  • the high-definition camera is used to recognize dynamic objects in a recognition area, and the method includes:
  • receiving real-time image data in the recognition area sent by the high-definition camera; matching the corresponding dynamic object database in the server according to the real-time image data to determine the type of the dynamic object; analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining a corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
  • the types of the dynamic objects include human bodies, animals, vehicles, and unknown objects.
  • the dynamic object database includes databases of different types of dynamic objects, and each type of the dynamic objects corresponds to a database.
  • the method of matching the corresponding dynamic object database in the server and determining the type of the dynamic object according to the real-time image data includes:
  • a database corresponding to the type of the dynamic object is determined according to the type of the dynamic object, wherein the human body corresponds to a human body database, the animal corresponds to an animal database, the vehicle corresponds to a vehicle database, and the unknown object corresponds to an unknown-object database.
  • the dynamic object database includes: a dynamic behavior model; and analyzing the behavior of the dynamic objects in the recognition area based on the dynamic object database in the server includes:
  • the behavior of the dynamic object is determined according to the dynamic behavior model.
  • the processing modes include a tracking mode and a snapshot mode. If the type of the dynamic object is a vehicle, determining, within the preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera, and controlling the high-definition camera to recognize the dynamic object based on the processing mode, includes:
  • if the behavior is uniform motion, determining that the processing mode is the snapshot mode and capturing snapshots of the vehicle;
  • if the behavior is non-uniform motion, determining that the processing mode is the tracking mode and tracking the vehicle.
  • the recognition area is provided with a first recognition area and a second recognition area, and both the first recognition area and the second recognition area are provided with a high-definition camera, and the method includes:
  • if the type of the dynamic object is determined to be a vehicle through the first recognition area, obtaining a first speed, a first direction, and a first acceleration of the vehicle passing through the first recognition area; calculating a second speed, a second direction, and a second acceleration of the vehicle passing through the second recognition area according to the first speed, the first direction, and the first acceleration; and determining an optimal capture angle of the vehicle passing through the second recognition area according to the second speed, the second direction, and the second acceleration;
  • the rotation direction and rotation speed of the high-definition camera in the second recognition area are then determined according to the optimal capture angle, so that the high-definition camera in the second recognition area shoots based on the optimal capture angle.
  • the method further includes: obtaining the speed and acceleration of the dynamic object entering the second recognition area, and controlling the rotation speed and acceleration of the high-definition camera accordingly, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
  • the method further includes: determining, according to the real-time image data sent by the high-definition camera of the first recognition area, whether a dynamic object is recognized in the first recognition area;
  • if so, the high-definition camera of the second recognition area is activated.
  • the dynamic object recognition system further includes: a mobile terminal, the mobile terminal is communicatively connected to the server, and the method further includes: receiving a mode selection request sent by the mobile terminal; and controlling, based on the processing mode of the mode selection request, the high-definition camera to recognize the dynamic object based on the processing mode.
  • an embodiment of the present invention provides a dynamic object recognition device, where the device includes:
  • a receiving unit configured to receive real-time image data sent by the high-definition camera
  • a category determining unit configured to determine a category of a dynamic object according to the real-time image data
  • a matching unit configured to match the corresponding dynamic object database in the server according to the type of the dynamic object
  • a behavior analysis unit configured to analyze the behavior of the dynamic object based on the dynamic object database in the server
  • the recognition unit is configured to determine a corresponding processing mode of the high-definition camera based on the behavior of the dynamic object within a preset time, and control the high-definition camera to recognize the dynamic object based on the processing mode.
  • an embodiment of the present invention provides a dynamic object recognition system, including:
  • a server comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the dynamic object recognition method described above;
  • at least one high-definition camera, each of which is connected to the server and is configured to obtain image data or video data of the dynamic object;
  • the mobile terminal is communicatively connected to the server, and is configured to send a mode selection request to the server, and obtain image data or video data of the dynamic object.
  • an embodiment of the present invention further provides a non-volatile computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and
  • the computer-executable instructions are used to enable a server to execute the dynamic object recognition method described above.
  • a beneficial effect of the embodiments of the present invention is that, in contrast with the prior art, a dynamic object recognition method provided by an embodiment of the present invention is applied to a dynamic object recognition system.
  • the dynamic object recognition system includes: a server and at least one high-definition camera, the high-definition camera being used for recognizing dynamic objects in a recognition area, and the method includes: receiving real-time image data in the recognition area sent by the high-definition camera; determining the category of the dynamic object according to the real-time image data; matching the corresponding dynamic object database in the server according to the type of the dynamic object; analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining a corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
  • the embodiments of the present invention can solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
  • FIG. 1 is a schematic diagram of an application scenario of a dynamic object recognition method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a dynamic object recognition method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a dynamic object recognition device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a dynamic object recognition system according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • a plurality of high-definition cameras are set in a recognition area
  • the recognition area is an active area of a dynamic object
  • the recognition area may be a hotel, a highway intersection, a home, a parking lot, an exhibition hall, a square, a restaurant, a campus, or the like.
  • the HD cameras of the recognition area are distributed in different places, so that multiple HD cameras can cover the recognition area and obtain video data or image data of the recognition area.
  • the recognition area may be a section of a highway intersection, or a section of a parking lot, or a section of a campus walkway, and so on.
  • the recognition area includes: a first recognition area and a second recognition area.
  • by default, the starting position of the dynamic object is in the first recognition area, and the dynamic object moves within the first recognition area or moves from the first recognition area to the second recognition area; the plurality of high-definition cameras can obtain video data or image data of the first recognition area and the second recognition area.
  • FIG. 2 is a schematic flowchart of a dynamic object recognition method according to an embodiment of the present invention.
  • the method is applied to a dynamic object recognition system.
  • the dynamic object recognition system includes a server and at least one high-definition camera.
  • the high-definition camera is used to identify a dynamic object in a recognition area.
  • Step S10: receiving real-time image data in the recognition area sent by the high-definition camera;
  • a plurality of high-definition cameras are provided in the recognition area; the high-definition cameras are configured to obtain real-time image data in the recognition area and send the real-time image data to the server, and the server receives the real-time image data in the recognition area sent by the high-definition cameras.
  • the high-definition camera acquires image data in the recognition area based on a certain frequency, and sends the image data to the server.
  • the data transmission methods include:
  • (1) Baseband transmission: the high-definition camera is connected to the server through a coaxial cable, and an analog signal is transmitted through the coaxial cable.
  • the server then converts the analog signal into a digital signal to generate real-time image data.
  • (2) Optical fiber transmission: the high-definition camera and the server are connected by optical fiber, and the real-time image data is transmitted through the fiber as an optical signal.
  • (3) Wireless network transmission: the high-definition camera is provided with a wireless communication module,
  • and the server is also provided with a wireless communication module.
  • the high-definition camera communicates with the server through a wireless transmission protocol to send the real-time video data to the server.
  • the wireless communication module may be a WIFI module or a Bluetooth module.
  • (4) Microwave transmission: using frequency modulation or amplitude modulation, the real-time image obtained by the high-definition camera is carried on a high-frequency carrier wave and converted into high-frequency electromagnetic waves for transmission through the air, realizing dynamic real-time image transmission.
  • (5) Network cable transmission: the high-definition camera is connected to the server through a network cable, and real-time image data is sent to the server over the network cable using a differential transmission method.
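The transmission options above are mostly hardware-level choices. Purely as an illustration of the network-cable or Wi-Fi case, the minimal sketch below (an assumption, not part of the patent) shows a camera-side client pushing JPEG-encoded frames to the server over TCP with a 4-byte length prefix, so the server can split the byte stream back into frames. The server address and the `capture_frame()` helper are hypothetical stand-ins for whatever the camera SDK provides.

```python
import socket
import struct

SERVER_ADDR = ("192.168.1.10", 9000)   # hypothetical server address


def send_frames(capture_frame):
    """Push JPEG frames to the server until capture_frame() returns None.

    capture_frame() -> bytes is a stand-in for the high-definition camera's
    own frame-grabbing API.
    """
    with socket.create_connection(SERVER_ADDR) as sock:
        while True:
            jpeg = capture_frame()
            if jpeg is None:
                break
            # length-prefixed framing: 4-byte big-endian size, then the payload
            sock.sendall(struct.pack(">I", len(jpeg)) + jpeg)
```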
  • Step S20: matching the corresponding dynamic object database in the server according to the real-time image data to determine the type of the dynamic object;
  • after the high-definition camera obtains the real-time image data in the recognition area, feature recognition is performed on the real-time image to identify its features, and the category of the dynamic object is determined based on those features.
  • when performing feature recognition on the real-time image, the real-time image may also be divided into image blocks of the same size, and feature recognition may be performed on the image blocks to obtain the features in the real-time image and determine the category of the dynamic object.
  • the feature information after the feature recognition is compared with a corresponding dynamic object database in the server, and if the comparison is successful, the type of the dynamic object is determined.
  • the types of the dynamic objects include: human bodies, animals, vehicles, and unknown objects.
  • image recognition is performed based on a deep convolutional neural network algorithm, and a dynamic object database in the server is determined in advance.
  • the dynamic object database stores the characteristics and behaviors of different types of dynamic objects, where the types of the dynamic objects include: human bodies, animals, vehicles, and unknown objects.
  • the dynamic object database includes databases of different types of dynamic objects, and each type of dynamic object corresponds to one database; matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object includes: determining, according to the category of the dynamic object, the database corresponding to that category, and matching the characteristics and behaviors of the dynamic object through the database corresponding to the category, wherein the human body corresponds to a human body database, the animal corresponds to an animal database, the vehicle corresponds to a vehicle database, and the unknown object corresponds to an unknown-object database.
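As a sketch of how the per-category dispatch described above could look in practice, the snippet below routes a classifier result to one of four category databases. `classify_with_cnn` stands in for the deep convolutional neural network mentioned in the text, and the in-memory "databases" are placeholders; neither is specified by the patent.

```python
from typing import Callable, Dict

# Placeholder per-category databases mirroring the human/animal/vehicle/unknown split;
# in the real system these would live on the server and hold features and behavior models.
DATABASES: Dict[str, dict] = {
    "human":   {"features": [], "behavior_model": "human_behavior"},
    "animal":  {"features": [], "behavior_model": "animal_behavior"},
    "vehicle": {"features": [], "behavior_model": "vehicle_behavior"},
    "unknown": {"features": [], "behavior_model": "unknown_behavior"},
}


def dispatch(frame, classify_with_cnn: Callable[[object], str]) -> dict:
    """Classify a frame and return the database matching the predicted category.

    classify_with_cnn is assumed: anything that maps a frame to one of the four
    labels above will do. Unrecognized labels fall back to the unknown-object database.
    """
    category = classify_with_cnn(frame)
    return DATABASES.get(category, DATABASES["unknown"])
```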
  • Step S30: analyzing the behavior of the dynamic objects in the recognition area based on the dynamic object database in the server;
  • the dynamic object database includes: a dynamic behavior model; and analyzing the behavior of dynamic objects in the recognition area based on the dynamic object database in the server includes: determining the behavior of the dynamic object according to the dynamic behavior model.
  • it can be understood that each type of object corresponds to a different dynamic behavior model, for example, human bodies, animals, vehicles, and unknown objects each correspond to a dynamic behavior model; all the dynamic behavior models are stored in the dynamic object database, and the database corresponding to each type of dynamic object matches the dynamic behavior model of objects of that category, that is, the human body database corresponds to the human behavior model, the animal database to the animal behavior model, the vehicle database to the vehicle behavior model, and the unknown-object database to the unknown-object behavior model.
  • each dynamic behavior model corresponds to a variety of behaviors; for example, the human behavior model corresponds to a person's posture and motion state, the posture including a standing posture, a sitting posture, a sleeping posture, and the like, and the motion state including uniform motion, accelerating motion, decelerating motion, motion in place, and the like, wherein the accelerating motion includes uniform acceleration and variable acceleration, and the decelerating motion includes uniform deceleration and variable deceleration.
  • the in-situ motion includes: hand motion, leg motion, waist motion, head motion, and so on.
  • the animal behavior model is similar to the human behavior model, and the vehicle behavior model includes the motion state of a vehicle, the motion state including: stationary, uniform motion, accelerating motion, and decelerating motion, wherein the accelerating motion includes uniform acceleration and variable acceleration, and the decelerating motion includes uniform deceleration and variable deceleration.
  • the variable acceleration motion refers to an acceleration motion whose acceleration is not constant, and the variable deceleration motion refers to a deceleration motion where the acceleration is not constant.
  • the behavior model of the unknown object includes: a moving direction of the unknown object and a real-time moving speed.
  • Step S40: within a preset time, determining a corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
  • the behavior of the dynamic object easily affects the sharpness of the image captured by the high-definition camera. If the same mode is still used for recognition under different behaviors, the captured image may not be sharp enough, which affects recognition and creates hidden risks for security monitoring.
  • the processing modes include a tracking mode and a snapshot mode. If the type of the dynamic object is a vehicle, determining the corresponding processing mode of the high-definition camera within a preset time based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode, includes:
  • if the behavior is uniform motion, determining that the processing mode is the snapshot mode and capturing snapshots of the vehicle;
  • if the behavior is non-uniform motion, determining that the processing mode is the tracking mode and tracking the vehicle.
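A minimal sketch of the decision rule just described, assuming speed samples (m/s) taken at a fixed interval within the preset time window; the tolerance that decides whether the motion counts as uniform is an assumed value, not one given in the patent.

```python
def choose_mode(speed_samples, tolerance=0.5):
    """Return 'snapshot' for (approximately) uniform motion, else 'tracking'.

    speed_samples: vehicle speeds in m/s sampled over the preset time window.
    tolerance: maximum spread (m/s) still treated as uniform motion.
    """
    if not speed_samples:
        return "tracking"  # no information yet: keep following the object
    uniform = max(speed_samples) - min(speed_samples) <= tolerance
    return "snapshot" if uniform else "tracking"
```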
  • the preset time may be set manually or may be automatically determined by the server.
  • the preset time may be set to 5 seconds, 10 seconds, and 15 seconds.
  • to improve the recognition rate, the preset time should be neither too short nor too long, so that the behavior of the dynamic object can be recognized as soon as possible and the server can take corresponding measures promptly, such as sending an alarm message to the mobile terminal, and so on.
  • the snapshot mode refers to using the high-definition camera to freeze an instant of motion with a sufficiently fast shutter speed, thereby capturing an image of the dynamic object, for example a jump for a dunk, a kick at the goal, and so on.
  • the tracking mode refers to keeping the dynamic object and the high-definition camera relatively still, so that the dynamic object appears vivid and sharp in the real-time image data captured by the high-definition camera.
  • real-time video data can be acquired through multiple frames of real-time image data.
  • by changing the shooting angle, focal length, and other parameters of the high-definition camera, automatic tracking of the dynamic object is achieved, that is, the camera automatically moves, changes magnification, and changes focus according to the specific orientation of the dynamic object.
  • the recognition area is provided with a first recognition area and a second recognition area, and both the first recognition area and the second recognition area are provided with a high-definition camera, and the method includes:
  • if the type of the dynamic object is determined to be a vehicle through the first recognition area, obtaining a first speed, a first direction, and a first acceleration of the vehicle passing through the first recognition area, and calculating a second speed, a second direction, and a second acceleration of the vehicle passing through the second recognition area according to the first speed, the first direction, and the first acceleration;
  • specifically, after the vehicle passes through the first recognition area, the server uses the real-time image data obtained by the high-definition camera. Since it takes a certain time for the vehicle to pass through the first recognition area, the first speed is obtained by calculating the average speed of the vehicle passing through the first recognition area, and that average speed is determined as the first speed.
  • similarly, the first direction is the average direction of the vehicle passing through the first recognition area; the average direction may be determined by the line connecting the position where the vehicle enters the first recognition area and the position where it leaves the first recognition area, pointing from the entry position to the exit position, and this is taken as the average direction, that is, the first direction.
  • the first acceleration is calculated from multiple frames of real-time image data, that is, multiple speeds corresponding to multiple positions of the vehicle within the first recognition area are calculated, and accelerations are derived from those speeds.
  • the multiple positions refer to positions of the vehicle separated by a fixed time; for example, the differences between adjacent speed values are calculated and, combined with the time, multiple accelerations of the vehicle are obtained and averaged, and the average value is used as the first acceleration.
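The averaging described above can be written down directly. The sketch below assumes timestamped (t, x, y) samples of the vehicle inside the first recognition area, already projected to ground coordinates in seconds and metres; that projection is outside the patent text.

```python
import math


def first_area_kinematics(track):
    """Estimate (first_speed, first_direction, first_acceleration) from
    timestamped ground-plane samples (t, x, y) ordered by time."""
    if len(track) < 2:
        raise ValueError("need at least two samples")
    (t0, x0, y0), (tn, xn, yn) = track[0], track[-1]

    # average speed: path length divided by the crossing time
    path = sum(math.dist(track[i][1:], track[i + 1][1:]) for i in range(len(track) - 1))
    speed = path / (tn - t0)

    # average direction: unit vector from the entry point to the exit point
    dx, dy = xn - x0, yn - y0
    norm = math.hypot(dx, dy) or 1.0
    direction = (dx / norm, dy / norm)

    # per-segment speeds, then finite-difference accelerations, then their mean
    seg_speeds = [math.dist(track[i][1:], track[i + 1][1:]) / (track[i + 1][0] - track[i][0])
                  for i in range(len(track) - 1)]
    accels = [(seg_speeds[i + 1] - seg_speeds[i]) / ((track[i + 2][0] - track[i][0]) / 2.0)
              for i in range(len(seg_speeds) - 1)]
    acceleration = sum(accels) / len(accels) if accels else 0.0
    return speed, direction, acceleration
```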
  • calculating the second speed, the second direction, and the second acceleration of the vehicle passing through the second recognition area according to the first speed, the first direction, and the first acceleration includes:
  • determining the second speed from the first speed and the first acceleration, combined with the distance between the first recognition area and the second recognition area, while taking the first acceleration as the second acceleration and the first direction as the second direction. It can be understood that if the first acceleration is zero, the second acceleration also defaults to zero.
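The text names the inputs but not the formula. One natural reading, assuming the first acceleration is held constant over the gap between the two areas, is the standard kinematic relation sketched below; the closed form is an assumption, not the patent's stated method.

```python
import math


def predict_second_area(first_speed, first_acceleration, first_direction, gap_distance):
    """Predict (second_speed, second_acceleration, second_direction), assuming the
    first acceleration stays constant over gap_distance metres between the areas.
    Uses v2^2 = v1^2 + 2*a*d, clamped at zero so the square root stays real."""
    v2_squared = max(first_speed ** 2 + 2.0 * first_acceleration * gap_distance, 0.0)
    return math.sqrt(v2_squared), first_acceleration, first_direction
```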
  • the first recognition area is a first position area where the dynamic object enters the recognition area
  • the second recognition area is a second position area that the dynamic object enters after a threshold time within the recognition area
  • the first recognition area and the second recognition area are both located in the recognition area
  • the first recognition area and the second recognition area can both obtain real-time image data by the high-definition camera.
  • the positions of the first recognition area and the second recognition area are variable, and the first recognition area and the second recognition area can be determined according to the time, movement direction, and movement speed of the dynamic object in the recognition area. Determining the first recognition area and the second recognition area through a moving direction and a moving speed of the dynamic object is beneficial for better recognition of the dynamic object.
  • the method further includes: acquiring a speed, an acceleration, and a moving direction of the dynamic object entering the second recognition area;
  • the speed and acceleration of the dynamic object entering the second recognition area are calculated from the speed, acceleration, and movement direction of the dynamic object in the first recognition area; for example, the acceleration of the dynamic object is determined from its movement distance in the first recognition area and taken as the acceleration with which it enters the second recognition area, and the instantaneous speed at which the dynamic object leaves the first recognition area is taken as the speed at which it enters the second recognition area;
  • the rotation speed and acceleration of the high-definition camera are then controlled according to that speed and acceleration, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
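One way to turn the entry speed into a pan rate, assuming the camera sits at a known lateral offset from the object's straight-line path, is to match the camera's angular velocity to the object's bearing as seen from the camera. The geometry and parameter names below are an illustration, not the patent's method.

```python
def pan_rate(object_speed, along_track, lateral_offset):
    """Angular velocity (rad/s) needed to stay pointed at an object moving at
    object_speed (m/s) along a straight path.

    along_track: object's signed distance (m) along the path from the point
                 closest to the camera.
    lateral_offset: camera's perpendicular distance (m) from the path.
    """
    # bearing from camera to object: theta = atan(along_track / lateral_offset)
    # d(theta)/dt = lateral_offset * v / (lateral_offset^2 + along_track^2)
    return lateral_offset * object_speed / (lateral_offset ** 2 + along_track ** 2)
```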
  • the method further includes: determining whether a dynamic object is recognized in the first recognition area according to the real-time image data sent by the high-definition camera of the first recognition area; if yes, activating the high-definition camera of the second recognition area.
  • the dynamic object moves from the first recognition area to the second recognition area by default, or the dynamic object moves only within the first recognition area.
  • the high-definition camera obtains real-time image data of the first recognition area
  • when the server receives the real-time image data of the first recognition area and recognizes that a dynamic object is present in that data, it activates the high-definition camera of the second recognition area, or controls some of the high-definition cameras in the recognition area to turn toward the second recognition area, so that the server can obtain real-time image data of the second recognition area.
  • the dynamic object recognition system further includes: a mobile terminal, the mobile terminal is communicatively connected to the server, and the method further includes: receiving a mode selection request sent by the mobile terminal; and controlling, based on the processing mode of the mode selection request, the high-definition camera to recognize the dynamic object based on the processing mode.
  • the mode selection request includes a processing mode, and the processing mode includes a snapshot mode and a tracking mode.
  • the mobile terminal sends the mode selection request to the server in the form of an instruction or a message; after the server receives the mode selection request, it controls the high-definition camera to recognize the dynamic object based on the processing mode.
  • the method further includes: determining whether the dynamic object enters or leaves the recognition area; or determining whether a dynamic object appears in the recognition area; or determining whether illegal parking occurs in the recognition area, whether objects are placed or taken away by a person in the recognition area, whether a person lingers between areas, or whether a crowd gathers in the recognition area, in which case the number of people is counted from the real-time image data.
  • the method further includes: acquiring, by the high-definition camera, image or video data of the recognition area in different periods.
  • the motion path of the dynamic object is determined by the motion of the dynamic object in the recognition area, and so on.
  • a dynamic object recognition method is provided and applied to a dynamic object recognition system.
  • the dynamic object recognition system includes a server and at least one high-definition camera, and the high-definition camera is used for recognizing dynamic objects in a recognition area.
  • the method includes the following steps: receiving real-time image data in the recognition area sent by the high-definition camera; determining the type of the dynamic object according to the real-time image data; matching the corresponding dynamic object database in the server according to the type of the dynamic object;
  • analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
  • FIG. 3 is a schematic flowchart of a dynamic object recognition device according to an embodiment of the present invention.
  • the dynamic object recognition device 100 is applied to a server, the server is connected to multiple HD cameras, and the multiple HD cameras are respectively disposed in a recognition area, such as a parking lot.
  • the dynamic object recognition device 100 includes:
  • a receiving unit 10 configured to receive real-time image data sent by the high-definition camera
  • a determining unit 20 configured to match a corresponding dynamic object database in the server according to the real-time image data to determine a category of the dynamic object
  • a behavior analysis unit 30 configured to analyze the behavior of the dynamic object based on the dynamic object database in the server;
  • the recognition unit 40 is configured to determine a processing mode corresponding to the high-definition camera based on the behavior of the dynamic object within a preset time, and control the high-definition camera to recognize the dynamic object based on the processing mode.
  • since the device embodiment and the method embodiment are based on the same concept, as long as their contents do not conflict, the device embodiment may refer to the content of the method embodiment, and details are not repeated herein.
  • FIG. 4 is a schematic structural diagram of a dynamic object recognition system according to an embodiment of the present invention.
  • the dynamic object recognition system 400 includes a server 410, a plurality of high-definition cameras 420, and a mobile terminal 430.
  • the plurality of high-definition cameras 420 are respectively connected to the server 410, and the mobile terminal 430 is communicatively connected to the server 410.
  • the server 410 is configured to receive a monitoring request sent by the mobile terminal 430 and an image sent by the high-definition camera 420. Please refer to FIG. 5.
  • FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in FIG. 5, the server 410 includes: one or more processors 411 and a memory 412. Among them, one processor 411 is taken as an example in FIG. 5.
  • the processor 411 and the memory 412 may be connected through a bus or in other manners.
  • the connection through the bus is taken as an example.
  • the memory 412 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the units corresponding to the dynamic object recognition method in the embodiments of the present invention (e.g., the units described in FIG. 3).
  • the processor 411 executes various functional applications and data processing of the dynamic object recognition method by running the non-volatile software programs, instructions, and modules stored in the memory 412, that is, it implements the dynamic object recognition method of the method embodiments and the functions of the modules and units of the device embodiments described above.
  • the memory 412 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 412 may optionally include a memory remotely disposed with respect to the processor 411, and these remote memories may be connected to the processor 411 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the modules are stored in the memory 412 and, when executed by the one or more processors 411, execute the dynamic object recognition method in any of the above method embodiments, for example, the steps shown in FIG. 2 described above; the functions of the modules or units described in FIG. 3 can also be implemented.
  • the server 410 in the embodiment of the present invention exists in various forms.
  • the server 410 includes, but is not limited to:
  • (1) Tower server: a typical tower server case is similar to a commonly used PC case, while a large tower case is much bulkier; in general, there is no fixed standard for the overall dimensions.
  • (2) Rack server: a server built to the standard 19-inch rack width to meet dense enterprise deployment, with heights from 1U upward; mounting servers in racks eases daily maintenance, management, and cabling.
  • (3) Blade server: a low-cost server platform with high availability and high density (HAHD), specially designed for special application industries and high-density computing environments. Each "blade" is actually a system motherboard, similar to an individual server. In this mode, each motherboard runs its own system and serves a designated user group, with no association between them. However, system software can group these motherboards into a server cluster; in cluster mode, all the motherboards can be connected to provide a high-speed network environment, share resources, and serve the same user group.
  • the high-definition camera 420 is disposed in a recognition area, such as a parking lot, and is connected to the server 410.
  • the high-definition camera 420 is configured to obtain real-time image data of the recognition area and send the real-time image data to The server 410.
  • the multiple high-definition cameras 420 are respectively connected to the server 410 and are respectively disposed at different positions in the recognition area to acquire images of dynamic objects in different regions of the recognition area, so that the server 410 can obtain video surveillance images of the recognition area in all directions.
  • the high-definition camera 420 is configured to obtain real-time image data in the first recognition area and the second recognition area, and is used to identify the type of the dynamic object.
  • the high-definition camera 420 may also receive commands sent by the server 410 to acquire the face image in real time, or, according to the commands sent by the server 410, adjust its rotation angle, rotation speed, and rotation acceleration in real time to track the dynamic object.
  • the mobile terminal 430 is communicatively connected to the server 410 and is configured to send a mode selection request to the server 410, so that the server 410, based on the processing mode of the mode selection request, controls the high-definition camera 420 to recognize the dynamic object based on that processing mode, and to receive real-time image data or video data sent by the server 410.
  • the mobile terminal 430 includes, but is not limited to:
  • (1) Mobile communication equipment: this type of equipment is characterized by mobile communication functions, and its main goal is to provide voice and data communication.
  • such electronic devices include smartphones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • (2) Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access.
  • such electronic devices include: PDA, MID, and UMPC devices, such as the iPad.
  • (3) Portable entertainment equipment: this type of equipment can display and play video content and generally also has mobile Internet access; such devices include video players, handheld game consoles, as well as smart toys and portable car navigation devices.
  • (4) Other electronic devices with video playback and Internet access functions.
  • An embodiment of the present invention also provides a non-volatile computer storage medium.
  • the computer storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more processors, such as the processor 411 in FIG. 5.
  • this may cause the one or more processors to execute the dynamic object recognition method in any of the foregoing method embodiments, for example, to execute
  • each step shown in FIG. 2 described above; the functions of each unit described in FIG. 3 can also be realized.
  • the system includes: a server, the server includes: at least one processor; and a memory communicatively connected to the at least one processor;
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the dynamic object recognition method described above;
  • at least one high-definition camera, each of which is connected to the server and is used to obtain image data or video data of the dynamic object;
  • a mobile terminal, communicatively connected to the server, used to send a mode selection request to the server and obtain image data or video data of the dynamic object.
  • the device or apparatus embodiments described above are only illustrative; the unit modules described as separate components may or may not be physically separate, and the components displayed as module units may or may not be physical units; they may be located in one place or distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Embodiments of the present invention relate to the field of intelligent security technology and disclose a dynamic object recognition method, device, and system. The dynamic object recognition method includes: receiving real-time image data in a recognition area sent by the high-definition camera; matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object; analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode. In this way, the embodiments of the present invention solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.

Description

Dynamic object recognition method, device, and system. Technical field
The present invention relates to the field of intelligent security technology, and in particular to a dynamic object recognition method, device, and system.
Background art
A dynamic object is an object whose motion state changes over time, as opposed to a static object. With the development of computer hardware and image processing technology, dynamic object recognition technology has been widely applied in many areas of daily life, such as medical imaging, visual reconstruction, autonomous navigation, and visual control.
At present, dynamic objects are generally recognized by automatic tracking, in which the camera is moved, its magnification changed, and its focus adjusted so as to follow and photograph the dynamic object. However, because the motion states of dynamic objects differ and the camera itself is in motion, it is often difficult to capture a clear image of the dynamic object, so the dynamic object cannot be recognized, creating hidden safety risks in surveillance.
On this basis, embodiments of the present invention provide a dynamic object recognition method, device, and system, which solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
Summary of the invention
Embodiments of the present invention aim to provide a dynamic object recognition method, device, and system, which solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
In a first aspect, an embodiment of the present invention provides a dynamic object recognition method applied to a dynamic object recognition system, the dynamic object recognition system including a server and at least one high-definition camera, the high-definition camera being used to recognize dynamic objects in a recognition area, the method including:
receiving real-time image data in the recognition area sent by the high-definition camera;
matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object;
analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server;
within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
In some embodiments, the categories of the dynamic object include: human body, animal, vehicle, and unknown object; the dynamic object database includes databases of different categories of dynamic objects, each category of dynamic object corresponding to one database; and matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object includes:
determining, according to the category of the dynamic object, the database corresponding to the category of the dynamic object, wherein the human body corresponds to a human body database, the animal corresponds to an animal database, the vehicle corresponds to a vehicle database, and the unknown object corresponds to an unknown-object database.
In some embodiments, the dynamic object database includes a dynamic behavior model, and analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server includes:
determining the behavior of the dynamic object according to the dynamic behavior model.
In some embodiments, the processing modes include a tracking mode and a snapshot mode; if the category of the dynamic object is a vehicle, determining, within the preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera and controlling the high-definition camera to recognize the dynamic object based on the processing mode includes:
if the behavior is uniform motion, determining that the processing mode is the snapshot mode and capturing snapshots of the vehicle;
if the behavior is non-uniform motion, determining that the processing mode is the tracking mode and tracking the vehicle.
In some embodiments, the recognition area is provided with a first recognition area and a second recognition area, both of which are provided with high-definition cameras, and the method includes:
if the category of the dynamic object is determined to be a vehicle through the first recognition area, obtaining a first speed, a first direction, and a first acceleration of the vehicle passing through the first recognition area;
calculating, according to the first speed, the first direction, and the first acceleration, a second speed, a second direction, and a second acceleration of the vehicle passing through the second recognition area;
determining, according to the second speed, the second direction, and the second acceleration, an optimal capture angle of the vehicle passing through the second recognition area;
determining, according to the optimal capture angle, the rotation direction and rotation speed of the high-definition camera in the second recognition area, so that the high-definition camera in the second recognition area shoots based on the optimal capture angle.
In some embodiments, the method further includes:
obtaining the speed and acceleration of the dynamic object entering the second recognition area;
controlling the rotation speed and acceleration of the high-definition camera according to the speed and acceleration of the dynamic object entering the second recognition area, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
In some embodiments, the method further includes:
determining, according to the real-time image data sent by the high-definition camera of the first recognition area, whether a dynamic object is recognized in the first recognition area;
if so, activating the high-definition camera of the second recognition area.
In some embodiments, the dynamic object recognition system further includes a mobile terminal communicatively connected to the server, and the method further includes:
receiving a mode selection request sent by the mobile terminal;
controlling, based on the processing mode of the mode selection request, the high-definition camera to recognize the dynamic object based on the processing mode.
In a second aspect, an embodiment of the present invention provides a dynamic object recognition device, the device including:
a receiving unit configured to receive the real-time image data sent by the high-definition camera;
a category determining unit configured to determine the category of the dynamic object according to the real-time image data;
a matching unit configured to match the corresponding dynamic object database in the server according to the category of the dynamic object;
a behavior analysis unit configured to analyze the behavior of the dynamic object based on the dynamic object database in the server;
a recognition unit configured to determine, within a preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera, and to control the high-definition camera to recognize the dynamic object based on the processing mode.
In a third aspect, an embodiment of the present invention provides a dynamic object recognition system, including:
a server, the server including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the dynamic object recognition method described above;
at least one high-definition camera, each of which is connected to the server and is configured to obtain image data or video data of the dynamic object;
a mobile terminal communicatively connected to the server and configured to send a mode selection request to the server and to obtain image data or video data of the dynamic object.
In a fourth aspect, an embodiment of the present invention further provides a non-volatile computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to enable a server to execute the dynamic object recognition method described above.
The beneficial effects of the embodiments of the present invention are as follows. In contrast with the prior art, an embodiment of the present invention provides a dynamic object recognition method applied to a dynamic object recognition system, the system including a server and at least one high-definition camera, the high-definition camera being used to recognize dynamic objects in a recognition area, the method including: receiving real-time image data in the recognition area sent by the high-definition camera; determining the category of the dynamic object according to the real-time image data; matching the corresponding dynamic object database in the server according to the category of the dynamic object; analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode. In this way, the embodiments of the present invention can solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
Brief description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not limit the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a dynamic object recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a dynamic object recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a dynamic object recognition device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a dynamic object recognition system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.
In an embodiment of the present invention, as shown in FIG. 1, a plurality of high-definition cameras are arranged in a recognition area. The recognition area is the activity area of dynamic objects and may be a hotel, a highway intersection, a home, a parking lot, an exhibition hall, a square, a restaurant, a campus, or the like. The high-definition cameras of the recognition area are distributed in different places so that the plurality of high-definition cameras can cover the recognition area and obtain video data or image data of the recognition area. The recognition area may be a section of a highway intersection, a section of a parking lot, a section of a campus walkway, and so on. The recognition area includes a first recognition area and a second recognition area. In this embodiment of the present invention, by default the starting position of the dynamic object is in the first recognition area, and the dynamic object moves within the first recognition area or moves from the first recognition area to the second recognition area; the plurality of high-definition cameras can obtain video data or image data of the first recognition area and the second recognition area.
Embodiment 1
Please refer to FIG. 2, which is a schematic flowchart of a dynamic object recognition method according to an embodiment of the present invention.
As shown in FIG. 2, the method is applied to a dynamic object recognition system, the dynamic object recognition system including a server and at least one high-definition camera, the high-definition camera being used to recognize dynamic objects in a recognition area, and the method includes:
Step S10: receiving real-time image data in the recognition area sent by the high-definition camera;
Specifically, a plurality of high-definition cameras are arranged in the recognition area. The high-definition cameras are used to obtain real-time image data in the recognition area and send the real-time image data to the server, and the server receives the real-time image data in the recognition area sent by the high-definition cameras. Specifically, the high-definition camera obtains image data in the recognition area at a certain frequency and sends the image data to the server. The data transmission methods include:
(1) Baseband transmission: the high-definition camera is connected to the server through a coaxial cable and transmits an analog signal over the coaxial cable; the server then converts the analog signal into a digital signal to generate real-time image data.
(2) Optical fiber transmission: the high-definition camera and the server are connected by optical fiber, and the real-time image data is transmitted through the fiber as an optical signal.
(3) Wireless network transmission: the high-definition camera is provided with a wireless communication module and the server is also provided with a wireless communication module; the high-definition camera communicates with the server through a wireless transmission protocol and sends the real-time video data to the server. The wireless communication module may be a WIFI module or a Bluetooth module.
(4) Microwave transmission: using frequency modulation or amplitude modulation, the real-time image obtained by the high-definition camera is carried on a high-frequency carrier and converted into high-frequency electromagnetic waves transmitted through the air, realizing dynamic real-time image transmission.
(5) Network cable transmission: the high-definition camera is connected to the server through a network cable and sends real-time image data to the server over the network cable using a differential transmission method.
Step S20: matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object;
After the high-definition camera obtains the real-time image data in the recognition area, feature recognition is performed on the real-time image to identify the features of the real-time image, and the category of the dynamic object is determined according to those features. When performing feature recognition on the real-time image, the real-time image may also be divided into image blocks of the same size, and feature recognition may be performed on the image blocks to obtain the features in the real-time image and determine the category of the dynamic object. Specifically, the feature information obtained from feature recognition is compared with the corresponding dynamic object database in the server, and if the comparison succeeds, the category of the dynamic object is determined. The categories of the dynamic object include: human body, animal, vehicle, and unknown object.
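As an illustration of the block-wise processing mentioned in the preceding paragraph, the sketch below cuts a frame into equally sized tiles; each tile would then be passed to the feature-recognition step. NumPy is assumed to be available and the tile size is arbitrary; neither is specified by the patent.

```python
import numpy as np


def split_into_blocks(frame: np.ndarray, block: int = 64):
    """Split an H x W x C frame into non-overlapping block x block tiles.

    Edge tiles smaller than the block size are simply dropped here to keep the
    sketch short; a real system might pad the frame instead.
    """
    h, w = frame.shape[:2]
    return [frame[y:y + block, x:x + block]
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
```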
Specifically, image recognition is performed based on a deep convolutional neural network algorithm, and the dynamic object database in the server is determined in advance. The dynamic object database stores the features and behaviors of different categories of dynamic objects, where the categories of dynamic objects include: human body, animal, vehicle, and unknown object. The dynamic object database includes databases of different categories of dynamic objects, each category corresponding to one database. Matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object includes: determining, according to the category of the dynamic object, the database corresponding to that category, and matching the features and behaviors of the dynamic object through the database corresponding to the category, wherein the human body corresponds to a human body database, the animal corresponds to an animal database, the vehicle corresponds to a vehicle database, and the unknown object corresponds to an unknown-object database.
Step S30: analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server;
Specifically, the dynamic object database includes a dynamic behavior model, and analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server includes: determining the behavior of the dynamic object according to the dynamic behavior model. It can be understood that each category of object corresponds to a different dynamic behavior model; for example, human bodies, animals, vehicles, and unknown objects each correspond to a dynamic behavior model, all of which are stored in the dynamic object database, and the database corresponding to each category of dynamic object matches the dynamic behavior model of objects of that category, that is, the human body database corresponds to a human behavior model, the animal database to an animal behavior model, the vehicle database to a vehicle behavior model, and the unknown-object database to an unknown-object behavior model. Each dynamic behavior model corresponds to multiple behaviors. For example, the human behavior model corresponds to a person's posture and motion state; the posture includes standing, sitting, lying, and so on, and the motion state includes uniform motion, accelerating motion, decelerating motion, motion in place, and so on, where the accelerating motion includes uniform acceleration and variable acceleration, and the decelerating motion includes uniform deceleration and variable deceleration. Motion in place includes hand motion, leg motion, waist motion, head motion, and so on. The animal behavior model is similar to the human behavior model. The vehicle behavior model includes the motion state of a vehicle, which includes being stationary, uniform motion, accelerating motion, and decelerating motion, where the accelerating motion includes uniform acceleration and variable acceleration, and the decelerating motion includes uniform deceleration and variable deceleration. Variable acceleration refers to accelerating motion in which the acceleration is not constant, and variable deceleration refers to decelerating motion in which the acceleration is not constant. The unknown-object behavior model includes the movement direction and real-time movement speed of the unknown object.
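Following the vehicle taxonomy in the paragraph above, a small helper could map a sequence of sampled speeds onto one of the listed motion states. The thresholds below are illustrative assumptions, not values given in the patent.

```python
def vehicle_motion_state(speeds, dt, speed_tol=0.3, accel_tol=0.2):
    """Classify a vehicle's motion state from speeds (m/s) sampled every dt seconds.

    Returns one of: 'stationary', 'uniform', 'uniform_acceleration',
    'variable_acceleration', 'uniform_deceleration', 'variable_deceleration'.
    """
    if len(speeds) < 2:
        return "stationary" if (not speeds or speeds[0] < speed_tol) else "uniform"
    if max(speeds) < speed_tol:
        return "stationary"
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    mean_a = sum(accels) / len(accels)
    if abs(mean_a) < accel_tol:
        return "uniform"
    constant = max(accels) - min(accels) < accel_tol  # roughly constant acceleration?
    if mean_a > 0:
        return "uniform_acceleration" if constant else "variable_acceleration"
    return "uniform_deceleration" if constant else "variable_deceleration"
```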
By obtaining the real-time image data in the recognition area sent by the high-definition camera, matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object, determining the dynamic behavior model corresponding to that category based on the dynamic object database in the server, and analyzing the behavior of the dynamic object in the recognition area according to the dynamic behavior model, the behavior of the dynamic object can be determined quickly.
Step S40: within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
Specifically, the behavior of the dynamic object easily affects the sharpness of the image captured by the high-definition camera. If the same mode is still used for recognition under different behaviors, the captured image may not be sharp enough, which affects recognition and creates hidden risks for security monitoring.
Specifically, the processing modes include a tracking mode and a snapshot mode. If the category of the dynamic object is a vehicle, determining, within the preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera and controlling the high-definition camera to recognize the dynamic object based on the processing mode includes:
if the behavior is uniform motion, determining that the processing mode is the snapshot mode and capturing snapshots of the vehicle;
if the behavior is non-uniform motion, determining that the processing mode is the tracking mode and tracking the vehicle.
Specifically, the preset time may be set manually or determined automatically by the server; for example, the preset time may be set to 5 seconds, 10 seconds, or 15 seconds. To improve the recognition rate, the preset time should be neither too short nor too long, so that the behavior of the dynamic object can be recognized as soon as possible and the server can take corresponding measures promptly, such as sending alarm information to the mobile terminal, and so on.
Specifically, the snapshot mode means using the high-definition camera to freeze an instant of motion with a sufficiently fast shutter speed, thereby capturing an image of the dynamic object, for example a jump for a dunk, a kick at the goal, and so on.
Specifically, the tracking mode means that the dynamic object and the high-definition camera remain relatively still, so that the dynamic object appears vivid and sharp in the real-time image data of the high-definition camera. Moreover, real-time video data can be obtained from multiple frames of real-time image data. Specifically, by changing the shooting angle, focal length, and other parameters of the high-definition camera, automatic tracking of the dynamic object is achieved, that is, the camera automatically moves, changes magnification, and changes focus according to the specific orientation of the dynamic object.
Specifically, the recognition area is provided with a first recognition area and a second recognition area, both of which are provided with high-definition cameras, and the method includes:
if the category of the dynamic object is determined to be a vehicle through the first recognition area, obtaining a first speed, a first direction, and a first acceleration of the vehicle passing through the first recognition area;
calculating, according to the first speed, the first direction, and the first acceleration, a second speed, a second direction, and a second acceleration of the vehicle passing through the second recognition area;
Specifically, after the vehicle passes through the first recognition area, the server uses the real-time image data obtained by the high-definition camera. Since it takes a certain time for the vehicle to pass through the first recognition area, the first speed is obtained by calculating the average speed of the vehicle passing through the first recognition area, and that average speed is determined as the first speed. Similarly, the first direction is the average direction of the vehicle passing through the first recognition area; the average direction may be determined by the line connecting the position where the vehicle enters the first recognition area and the position where it leaves the first recognition area, pointing from the entry position to the exit position, and this is taken as the average direction, that is, the first direction. The first acceleration is calculated from multiple frames of real-time image data, that is, multiple speeds corresponding to multiple positions of the vehicle within the first recognition area are calculated and accelerations are derived from them. The multiple positions refer to positions of the vehicle separated by a fixed time; for example, the differences between adjacent speed values are calculated and, combined with the time, multiple accelerations of the vehicle are obtained and averaged, and the average value is taken as the first acceleration.
Calculating, according to the first speed, the first direction, and the first acceleration, the second speed, the second direction, and the second acceleration of the vehicle passing through the second recognition area includes:
determining the second speed from the first speed and the first acceleration, combined with the distance between the first recognition area and the second recognition area, while taking the first acceleration as the second acceleration and the first direction as the second direction. It can be understood that if the first acceleration is zero, the second acceleration also defaults to zero.
Determining, according to the second speed, the second direction, and the second acceleration, the optimal capture angle of the vehicle passing through the second recognition area; and determining, according to the optimal capture angle, the rotation direction and rotation speed of the high-definition camera in the second recognition area, so that the high-definition camera in the second recognition area shoots based on the optimal capture angle.
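The patent leaves "optimal capture angle" unspecified. One plausible heuristic, sketched below purely as an assumption, aims the camera head-on at the predicted approach direction of the vehicle and derives a pan command from the camera's current heading and the time left before the vehicle arrives.

```python
import math


def rotation_command(camera_heading_deg, vehicle_direction, lead_time):
    """Pan command to face the predicted vehicle heading in the second area.

    camera_heading_deg: current pan angle of the camera, degrees.
    vehicle_direction: predicted (dx, dy) unit vector of travel in the second area;
                       the 'optimal' angle here is simply head-on to that vector.
    lead_time: seconds until the vehicle is expected to arrive.
    Both the head-on heuristic and the uniform pan rate are illustrative assumptions.
    """
    target_deg = math.degrees(math.atan2(-vehicle_direction[1], -vehicle_direction[0]))
    delta = (target_deg - camera_heading_deg + 180.0) % 360.0 - 180.0  # shortest turn
    return {"direction": "clockwise" if delta < 0 else "counterclockwise",
            "speed_deg_per_s": abs(delta) / max(lead_time, 1e-3)}
```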
It can be understood that the first recognition area is the first position area where the dynamic object enters the recognition area, and the second recognition area is the second position area that the dynamic object enters after a threshold time within the recognition area. Both the first and second recognition areas are located within the recognition area, and real-time image data of both can be obtained by the high-definition cameras. The positions of the first and second recognition areas are variable and may be determined according to the time, movement direction, and movement speed of the dynamic object within the recognition area. Determining the first and second recognition areas from the movement direction and movement speed of the dynamic object is beneficial for better recognition of the dynamic object.
In this embodiment of the present invention, the method further includes: obtaining the speed, acceleration, and movement direction of the dynamic object entering the second recognition area;
controlling the rotation speed and acceleration of the high-definition camera according to the speed, acceleration, and movement direction of the dynamic object entering the second recognition area, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
Specifically, the speed and acceleration of the dynamic object entering the second recognition area are calculated from the speed, acceleration, and movement direction of the dynamic object in the first recognition area. For example, the acceleration of the dynamic object is determined from its movement distance in the first recognition area and is taken as the acceleration with which the dynamic object enters the second recognition area; the instantaneous speed at which the dynamic object leaves the first recognition area is taken as the speed with which it enters the second recognition area; and the rotation speed and acceleration of the high-definition camera are controlled according to the speed and acceleration of the dynamic object entering the second recognition area, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
In this embodiment of the present invention, the method further includes: determining, according to the real-time image data sent by the high-definition camera of the first recognition area, whether a dynamic object is recognized in the first recognition area; if so, activating the high-definition camera of the second recognition area.
Specifically, by default the dynamic object moves from the first recognition area to the second recognition area, or the dynamic object moves only within the first recognition area. When the high-definition camera obtains the real-time image data of the first recognition area and the server, after receiving that real-time image data, recognizes that a dynamic object is present in it, the high-definition camera of the second recognition area is activated, or some of the high-definition cameras in the recognition area are controlled to turn toward the second recognition area, so that the server can obtain real-time image data of the second recognition area.
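A minimal sketch of the activation logic in this paragraph: the server watches detections from the first-area camera and powers up the second-area camera once a dynamic object is seen. The `Camera` interface is a hypothetical stand-in for whatever control API the cameras expose.

```python
class Camera:
    """Hypothetical stand-in for a controllable high-definition camera."""

    def __init__(self, name):
        self.name, self.active = name, False

    def activate(self):
        self.active = True
        print(f"{self.name} activated")


def on_first_area_frame(detections, second_area_camera: Camera):
    """detections: list of (category, bounding_box) found in the first-area frame.

    Activate the second-area camera the first time any dynamic object is detected.
    """
    if detections and not second_area_camera.active:
        second_area_camera.activate()
```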
In this embodiment of the present invention, the dynamic object recognition system further includes a mobile terminal communicatively connected to the server, and the method further includes:
receiving a mode selection request sent by the mobile terminal;
controlling, based on the processing mode of the mode selection request, the high-definition camera to recognize the dynamic object based on the processing mode.
Specifically, the mode selection request includes a processing mode, and the processing mode includes a snapshot mode and a tracking mode. The mobile terminal sends the mode selection request to the server in the form of an instruction or a message; after receiving the mode selection request, the server controls the high-definition camera to recognize the dynamic object based on the processing mode.
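The mode-selection exchange could be as simple as a small JSON message from the mobile terminal; the sketch below validates such a message and returns the mode the server should apply. The field name and message format are assumptions, not specified by the patent.

```python
import json

VALID_MODES = {"snapshot", "tracking"}


def handle_mode_request(raw: bytes) -> str:
    """Parse a mode-selection request such as b'{"mode": "tracking"}'.

    Returns the processing mode the server should apply; malformed or unknown
    requests fall back to the snapshot mode as a safe default.
    """
    try:
        mode = json.loads(raw.decode("utf-8")).get("mode", "")
    except (ValueError, UnicodeDecodeError):
        return "snapshot"
    return mode if mode in VALID_MODES else "snapshot"
```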
In this embodiment of the present invention, the method further includes: determining whether the dynamic object enters or leaves the recognition area; or determining whether a dynamic object appears in the recognition area; or determining whether illegal parking occurs in the recognition area, whether objects are placed or taken away by a person in the recognition area, whether a person lingers between areas, or whether a crowd gathers in the recognition area, in which case the number of people is counted from the real-time image data.
In this embodiment of the present invention, the method further includes: obtaining image or video data of the recognition area in different time periods through the high-definition camera; determining the motion path of the dynamic object from its movement within the recognition area; and so on.
In this embodiment of the present invention, a dynamic object recognition method is provided and applied to a dynamic object recognition system, the system including a server and at least one high-definition camera, the high-definition camera being used to recognize dynamic objects in a recognition area, the method including: receiving real-time image data in the recognition area sent by the high-definition camera; determining the category of the dynamic object according to the real-time image data; matching the corresponding dynamic object database in the server according to the category of the dynamic object; analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server; and, within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode. In this way, the embodiments of the present invention can solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
Embodiment 2
Please refer to FIG. 3, which is a schematic structural diagram of a dynamic object recognition device according to an embodiment of the present invention.
As shown in FIG. 3, the dynamic object recognition device 100 is applied to a server, the server is connected to a plurality of high-definition cameras, and the plurality of high-definition cameras are arranged in a recognition area, such as a parking lot. The dynamic object recognition device 100 includes:
a receiving unit 10 configured to receive the real-time image data sent by the high-definition camera;
a determining unit 20 configured to match the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object;
a behavior analysis unit 30 configured to analyze the behavior of the dynamic object based on the dynamic object database in the server;
a recognition unit 40 configured to determine, within a preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera, and to control the high-definition camera to recognize the dynamic object based on the processing mode.
Since the device embodiment and the method embodiment are based on the same concept, provided that their contents do not conflict, the device embodiment may refer to the content of the method embodiment, which is not repeated here.
Please refer to FIG. 4, which is a schematic structural diagram of a dynamic object recognition system according to an embodiment of the present invention. As shown in FIG. 4, the dynamic object recognition system 400 includes a server 410, a plurality of high-definition cameras 420, and a mobile terminal 430. The plurality of high-definition cameras 420 are respectively connected to the server 410, and the mobile terminal 430 is communicatively connected to the server 410.
The server 410 is configured to receive a monitoring request sent by the mobile terminal 430 and images sent by the high-definition cameras 420. Please refer to FIG. 5, which is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in FIG. 5, the server 410 includes one or more processors 411 and a memory 412; one processor 411 is taken as an example in FIG. 5.
The processor 411 and the memory 412 may be connected through a bus or in other manners; connection through a bus is taken as an example in FIG. 5.
The memory 412, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the units corresponding to the dynamic object recognition method in the embodiments of the present invention (for example, the units described in FIG. 3). By running the non-volatile software programs, instructions, and modules stored in the memory 412, the processor 411 executes various functional applications and data processing of the dynamic object recognition method, that is, implements the dynamic object recognition method of the above method embodiment and the functions of the modules and units of the above device embodiment.
The memory 412 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 412 may optionally include memories remotely located relative to the processor 411, and these remote memories may be connected to the processor 411 through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The modules are stored in the memory 412 and, when executed by the one or more processors 411, execute the dynamic object recognition method in any of the above method embodiments, for example, the steps shown in FIG. 2 described above; the functions of the modules or units described in FIG. 3 can also be implemented.
The server 410 in the embodiments of the present invention exists in various forms. When executing the steps shown in FIG. 2 described above and implementing the functions of the units described in FIG. 3, the server 410 includes, but is not limited to:
(1) Tower server
A typical tower server case is similar to a commonly used PC case, while a large tower case is much bulkier; in general, there is no fixed standard for the overall dimensions.
(2) Rack server
The rack server is a server type formed to meet the dense deployment needs of enterprises, with a 19-inch rack as the standard width and heights ranging from 1U to several U. Placing servers in racks is not only convenient for daily maintenance and management but may also avoid unexpected failures. First, placing the server does not take up too much space: rack servers are arranged neatly in the rack and no space is wasted. Second, the connecting cables can also be stowed neatly in the rack: power cables, LAN cables, and so on can all be routed within the cabinet, reducing the cables piled on the floor and preventing accidents such as cables being kicked loose. The specified dimensions are the server width (48.26 cm = 19 inches) and height (multiples of 4.445 cm). Because the width is 19 inches, a rack meeting this specification is sometimes called a "19-inch rack".
(3) Blade server
The blade server is a low-cost server platform with high availability and high density (HAHD), specially designed for special application industries and high-density computing environments. Each "blade" is actually a system motherboard, similar to an individual server. In this mode, each motherboard runs its own system and serves a designated group of users, with no association between them. However, system software can be used to group these motherboards into a server cluster. In cluster mode, all the motherboards can be connected to provide a high-speed network environment and share resources, serving the same group of users.
The high-definition camera 420 is arranged in the recognition area, such as a parking lot, and is connected to the server 410. The high-definition camera 420 is used to obtain real-time image data of the recognition area and send the real-time image data to the server 410. In this embodiment of the present invention, there are multiple high-definition cameras 420, which are respectively connected to the server 410 and arranged at different positions in the recognition area to acquire images of dynamic objects in different regions of the recognition area, so that the server 410 can obtain video surveillance images of the recognition area in all directions. It can be understood that the high-definition cameras 420 are used to obtain real-time image data in the first and second recognition areas and to identify the category of the dynamic object. The high-definition camera 420 may also receive commands sent by the server 410 to obtain the face image in real time, or, according to commands sent by the server 410, adjust its rotation angle, rotation speed, and rotation acceleration in real time to track the dynamic object.
The mobile terminal 430 is communicatively connected to the server 410 and is configured to send a mode selection request to the server 410, so that the server 410, based on the processing mode of the mode selection request, controls the high-definition camera 420 to recognize the dynamic object based on that processing mode, and to receive real-time image data or video data sent by the server 410.
In this embodiment of the present invention, the mobile terminal 430 includes, but is not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions and have voice and data communication as their main goal. They include smartphones (such as the iPhone), multimedia phones, feature phones, low-end phones, and so on.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. They include PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play video content and generally also have mobile Internet access. They include video players, handheld game consoles, as well as smart toys and portable in-vehicle navigation devices.
(4) Other electronic devices with video playback and Internet access functions.
An embodiment of the present invention further provides a non-volatile computer storage medium storing computer-executable instructions, which are executed by one or more processors, such as the processor 411 in FIG. 5, so that the one or more processors can execute the dynamic object recognition method in any of the above method embodiments, for example, execute the steps shown in FIG. 2 described above; the functions of the units described in FIG. 3 can also be implemented.
In this embodiment of the present invention, a dynamic object recognition system is provided, the system including: a server, the server including at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the dynamic object recognition method described above; at least one high-definition camera, each of which is connected to the server and is used to obtain image data or video data of the dynamic object; and a mobile terminal communicatively connected to the server and used to send a mode selection request to the server and obtain image data or video data of the dynamic object. In this way, the embodiments of the present invention can solve the technical problem that it is currently difficult to obtain clear images of dynamic objects under different behaviors, improve the recognition rate of dynamic objects, and achieve better recognition of dynamic objects.
The device or apparatus embodiments described above are merely illustrative. The unit modules described as separate components may or may not be physically separate, and the components shown as module units may or may not be physical units; they may be located in one place or distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
Through the description of the above implementations, those skilled in the art can clearly understand that the implementations may be realized by means of software plus a general-purpose hardware platform, and of course also by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Under the idea of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present invention as described above, which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A dynamic object recognition method applied to a dynamic object recognition system, the dynamic object recognition system comprising a server and at least one high-definition camera, the high-definition camera being used to recognize dynamic objects in a recognition area, characterized in that the method comprises:
    receiving real-time image data in the recognition area sent by the high-definition camera;
    matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object;
    analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server;
    within a preset time, determining the corresponding processing mode of the high-definition camera based on the behavior of the dynamic object, and controlling the high-definition camera to recognize the dynamic object based on the processing mode.
  2. The method according to claim 1, characterized in that the categories of the dynamic object comprise: human body, animal, vehicle, and unknown object; the dynamic object database comprises databases of different categories of dynamic objects, each category of dynamic object corresponding to one database; and matching the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object comprises:
    determining, according to the category of the dynamic object, the database corresponding to the category of the dynamic object, wherein the human body corresponds to a human body database, the animal corresponds to an animal database, the vehicle corresponds to a vehicle database, and the unknown object corresponds to an unknown-object database.
  3. The method according to claim 1, characterized in that the dynamic object database comprises a dynamic behavior model, and analyzing the behavior of the dynamic object in the recognition area based on the dynamic object database in the server comprises:
    determining the behavior of the dynamic object according to the dynamic behavior model.
  4. The method according to claim 1, characterized in that the processing modes comprise a tracking mode and a snapshot mode, and if the category of the dynamic object is a vehicle, determining, within the preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera and controlling the high-definition camera to recognize the dynamic object based on the processing mode comprises:
    if the behavior is uniform motion, determining that the processing mode is the snapshot mode and capturing snapshots of the vehicle;
    if the behavior is non-uniform motion, determining that the processing mode is the tracking mode and tracking the vehicle.
  5. The method according to claim 4, characterized in that the recognition area is provided with a first recognition area and a second recognition area, both of which are provided with high-definition cameras, and the method comprises:
    if the category of the dynamic object is determined to be a vehicle through the first recognition area, obtaining a first speed, a first direction, and a first acceleration of the vehicle passing through the first recognition area;
    calculating, according to the first speed, the first direction, and the first acceleration, a second speed, a second direction, and a second acceleration of the vehicle passing through the second recognition area;
    determining, according to the second speed, the second direction, and the second acceleration, an optimal capture angle of the vehicle passing through the second recognition area;
    determining, according to the optimal capture angle, the rotation direction and rotation speed of the high-definition camera in the second recognition area, so that the high-definition camera in the second recognition area shoots based on the optimal capture angle.
  6. The method according to claim 5, characterized in that the method further comprises:
    obtaining the speed, acceleration, and movement direction of the dynamic object entering the second recognition area;
    controlling the rotation speed and acceleration of the high-definition camera according to the speed, acceleration, and movement direction of the dynamic object entering the second recognition area, so that the high-definition camera in the second recognition area tracks the dynamic object in real time.
  7. The method according to claim 6, characterized in that the method further comprises:
    determining, according to the real-time image data sent by the high-definition camera of the first recognition area, whether a dynamic object is recognized in the first recognition area;
    if so, activating the high-definition camera of the second recognition area.
  8. The method according to any one of claims 1 to 7, characterized in that the dynamic object recognition system further comprises a mobile terminal communicatively connected to the server, and the method further comprises:
    receiving a mode selection request sent by the mobile terminal;
    controlling, based on the processing mode of the mode selection request, the high-definition camera to recognize the dynamic object based on the processing mode.
  9. A dynamic object recognition device, characterized in that the device comprises:
    a receiving unit configured to receive the real-time image data sent by the high-definition camera;
    a determining unit configured to match the corresponding dynamic object database in the server according to the real-time image data to determine the category of the dynamic object;
    a behavior analysis unit configured to analyze the behavior of the dynamic object based on the dynamic object database in the server;
    a recognition unit configured to determine, within a preset time and based on the behavior of the dynamic object, the corresponding processing mode of the high-definition camera, and to control the high-definition camera to recognize the dynamic object based on the processing mode.
  10. A dynamic object recognition system, characterized by comprising:
    a server, the server comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method according to any one of claims 1 to 8;
    at least one high-definition camera, each of which is connected to the server and is used to obtain image data or video data of the dynamic object;
    a mobile terminal communicatively connected to the server, used to send a mode selection request to the server and obtain image data or video data of the dynamic object.
PCT/CN2019/103772 2018-09-21 2019-08-30 一种动态物体识别方法、装置及系统 WO2020057350A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811110382.8 2018-09-21
CN201811110382.8A CN109284715B (zh) 2018-09-21 2018-09-21 一种动态物体识别方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2020057350A1 true WO2020057350A1 (zh) 2020-03-26

Family

ID=65182075

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103772 WO2020057350A1 (zh) 2018-09-21 2019-08-30 一种动态物体识别方法、装置及系统

Country Status (2)

Country Link
CN (1) CN109284715B (zh)
WO (1) WO2020057350A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582112A (zh) * 2020-04-29 2020-08-25 重庆工程职业技术学院 一种针对密集人群进行异常人员筛查的工作设备和工作方法
CN111800590A (zh) * 2020-07-06 2020-10-20 深圳博为教育科技有限公司 一种导播控制方法、装置、系统及控制主机

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284715B (zh) * 2018-09-21 2021-03-02 深圳市九洲电器有限公司 一种动态物体识别方法、装置及系统
CN111881745A (zh) * 2020-06-23 2020-11-03 无锡北斗星通信息科技有限公司 基于大数据存储的满载检测系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006017676A (ja) * 2004-07-05 2006-01-19 Sumitomo Electric Ind Ltd 計測システムおよび計測方法
CN101465033A (zh) * 2008-05-28 2009-06-24 丁国锋 一种自动追踪识别系统及方法
CN104853104A (zh) * 2015-06-01 2015-08-19 深圳市微队信息技术有限公司 一种自动跟踪拍摄运动目标的方法以及系统
CN105138126A (zh) * 2015-08-26 2015-12-09 小米科技有限责任公司 无人机的拍摄控制方法及装置、电子设备
CN109284715A (zh) * 2018-09-21 2019-01-29 深圳市九洲电器有限公司 一种动态物体识别方法、装置及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090002140A (ko) * 2007-06-19 2009-01-09 한국전자통신연구원 행위분석에 의한 정보흐름 파악 및 정보유출 탐지 방법
CN101547344B (zh) * 2009-04-24 2010-09-01 清华大学深圳研究生院 基于联动摄像机的视频监控装置及其跟踪记录方法
CN201830388U (zh) * 2010-10-13 2011-05-11 成都创烨科技有限责任公司 一种视频内容采集及处理装置
CN202948559U (zh) * 2012-07-31 2013-05-22 株洲南车时代电气股份有限公司 一种视频与雷达检测的冗余热备卡口系统
JP5868816B2 (ja) * 2012-09-26 2016-02-24 楽天株式会社 画像処理装置、画像処理方法、及びプログラム
CN102945603B (zh) * 2012-10-26 2015-06-03 青岛海信网络科技股份有限公司 检测交通事件的方法及电子警察装置
CN103354029A (zh) * 2013-07-26 2013-10-16 安徽三联交通应用技术股份有限公司 一种多功能路口交通信息采集方法
CN106558224B (zh) * 2015-09-30 2019-08-02 徐贵力 一种基于计算机视觉的交通智能监管方法
CN105427619B (zh) * 2015-12-24 2017-06-23 上海新中新猎豹交通科技股份有限公司 车辆跟车距离自动记录系统及方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006017676A (ja) * 2004-07-05 2006-01-19 Sumitomo Electric Ind Ltd 計測システムおよび計測方法
CN101465033A (zh) * 2008-05-28 2009-06-24 丁国锋 一种自动追踪识别系统及方法
CN104853104A (zh) * 2015-06-01 2015-08-19 深圳市微队信息技术有限公司 一种自动跟踪拍摄运动目标的方法以及系统
CN105138126A (zh) * 2015-08-26 2015-12-09 小米科技有限责任公司 无人机的拍摄控制方法及装置、电子设备
CN109284715A (zh) * 2018-09-21 2019-01-29 深圳市九洲电器有限公司 一种动态物体识别方法、装置及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582112A (zh) * 2020-04-29 2020-08-25 重庆工程职业技术学院 一种针对密集人群进行异常人员筛查的工作设备和工作方法
CN111800590A (zh) * 2020-07-06 2020-10-20 深圳博为教育科技有限公司 一种导播控制方法、装置、系统及控制主机

Also Published As

Publication number Publication date
CN109284715B (zh) 2021-03-02
CN109284715A (zh) 2019-01-29

Similar Documents

Publication Publication Date Title
WO2020057350A1 (zh) 一种动态物体识别方法、装置及系统
US11004209B2 (en) Methods and systems for applying complex object detection in a video analytics system
US11423653B2 (en) Systems and methods for generating media content
US11045705B2 (en) Methods and systems for 3D ball trajectory reconstruction
US10282617B2 (en) Methods and systems for performing sleeping object detection and tracking in video analytics
US20190034734A1 (en) Object classification using machine learning and object tracking
US9934823B1 (en) Direction indicators for panoramic images
US10269135B2 (en) Methods and systems for performing sleeping object detection in video analytics
US10140718B2 (en) Methods and systems of maintaining object trackers in video analytics
US10152630B2 (en) Methods and systems of performing blob filtering in video analytics
US20180047193A1 (en) Adaptive bounding box merge method in blob analysis for video analytics
US9824723B1 (en) Direction indicators for panoramic images
US20170262706A1 (en) Smart tracking video recorder
US10115005B2 (en) Methods and systems of updating motion models for object trackers in video analytics
US20200005025A1 (en) Method, apparatus, device and system for processing commodity identification and storage medium
CN113596158A (zh) 一种基于场景的算法配置方法和装置
WO2021013187A1 (zh) 无人飞行器寻找信息生成方法及无人飞行器
CN107979731B (zh) 一种获取音视频数据的方法、装置及系统
US10026193B2 (en) Methods and systems of determining costs for object tracking in video analytics
WO2021192811A1 (en) A method and an apparatus for estimating an appearance of a first target
CN112597910B (zh) 利用扫地机器人对人物活动进行监控的方法和装置
US20230162375A1 (en) Method and system for improving target detection performance through dynamic learning
WO2021140966A1 (en) Method, apparatus and non-transitory computer readable medium
CN111800590B (zh) 一种导播控制方法、装置、系统及控制主机
JP7480841B2 (ja) イベントの管理方法、イベント管理装置、システム及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19863673

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 10.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19863673

Country of ref document: EP

Kind code of ref document: A1