CN114827436A - Camera shooting method and device - Google Patents

Camera shooting method and device

Info

Publication number
CN114827436A
CN114827436A
Authority
CN
China
Prior art keywords
scene
service
time period
time
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110120236.9A
Other languages
Chinese (zh)
Inventor
滕铮浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110120236.9A priority Critical patent/CN114827436A/en
Priority to PCT/CN2021/142238 priority patent/WO2022161080A1/en
Publication of CN114827436A publication Critical patent/CN114827436A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N7/188: Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Abstract

An image capture method and apparatus are provided. In the method, a single camera adjusts to different postures at different moments and executes different services, so that one camera executes multiple different services in a complex scene in a time-shared, per-point-location manner.

Description

Camera shooting method and device
Technical Field
The present application relates to the field of cameras, and in particular, to a method and an apparatus for capturing images.
Background
With the development of technologies such as the Internet of Things and artificial intelligence, intelligent security is widely applied in fields such as community security, safe cities, and smart cities. In intelligent security, the dome camera plays an irreplaceable role. A dome camera is a rotatable spherical camera, usually installed in public places such as streets and campuses. Because dome cameras are installed in different places, the services they execute also differ; common services include face detection, vehicle detection, personnel intrusion detection, violation detection, and the like.
In general, the cruise setting of a dome camera enables one dome camera to execute the same service in turn at different point locations (corresponding to different scenes) within an effective time period to complete a cruise task. However, when a certain scene has multiple service demands, or different scenes have different service demands, multiple dome cameras need to be deployed, with different dome cameras executing different services, to meet those demands. The hardware cost consumed is therefore high and the resource utilization rate is low.
Disclosure of Invention
The embodiments of the present application disclose a camera shooting method and apparatus, which enable a camera to execute multiple different services in a complex scene in a time-shared, per-point-location manner, improving the utilization rate of the camera and reducing the waste of resources in the complex scene.
In a first aspect, an embodiment of the present application provides an image capturing method applied to a camera, where the method includes: the camera collects a first image at a first moment and executes a first service on the first image; the camera collects a second image at a second moment, and executes a second service on the second image, wherein the second moment is different from the first moment, and the first service is different from the second service.
In the method, the camera can execute a first service on the first image acquired at the first moment and also can execute a second service on the second image acquired at the second moment, so that the camera can execute a plurality of different services in a time-sharing manner in a complex scene, the utilization rate of the camera is improved, and the waste of resources in the complex scene is effectively reduced.
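As a minimal, hypothetical sketch (the camera API and the service functions are illustrative stand-ins, not from the patent), the time-shared execution described above can be expressed as a dispatch loop that captures an image at each scheduled moment and runs the service bound to that moment:

```python
def run(camera, schedule):
    """Execute a time-shared cruise task.

    schedule: list of (moment, service_fn) pairs in time order, where
    service_fn is the analysis service bound to that moment (e.g. face
    detection at the first moment, vehicle detection at the second).
    """
    results = []
    for moment, service in schedule:
        image = camera.capture(moment)   # collect the image at this moment
        results.append(service(image))   # execute the bound service on it
    return results
```

The same camera object flows through every iteration, which is the point of the method: one device serves several different analysis services instead of one camera per service.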
In a possible implementation manner of the first aspect, before the camera acquires the second image at the second time, the method further includes: adjusting the first shooting view of the camera at the first moment to a second shooting view at the second moment.
By implementing the implementation mode, the camera acquires the first image with the first shooting view at the first moment and acquires the second image with the second shooting view at the second moment, so that the camera executes various different services in a time-sharing and point-dividing manner under a complex scene, the utilization rate of the camera is improved, and the waste of resources under the complex scene is effectively reduced.
In one possible implementation manner of the first aspect, the shooting field of view of the camera may be adjusted by adjusting one or more of the following parameters of a pan-tilt of the camera: pan, Tilt, and Zoom, thereby causing the camera to adjust from a first capture view at a first time to a second capture view at a second time.
By implementing this implementation, the shooting view (or point location) of the camera can be adjusted by adjusting the three parameters Pan (translation), Tilt, and Zoom of the camera, so that the camera acquires images in different shooting views at different moments, effectively improving the utilization rate of the camera.
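A small sketch of this PTZ-based view switch, under the assumption that each shooting view is fully described by its Pan/Tilt/Zoom triple (the class and function names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class PTZ:
    pan: float   # horizontal angle, degrees (parameter P)
    tilt: float  # pitch angle, degrees (parameter T)
    zoom: float  # lens zoom factor (parameter Z)

def adjust_view(current: PTZ, target: PTZ) -> dict:
    """Compute which pan-tilt parameters must be driven, and by how much,
    to move the camera from its current shooting view to the target view.
    Only the parameters that differ need to be adjusted."""
    moves = {}
    if current.pan != target.pan:
        moves["pan"] = target.pan - current.pan      # degrees to rotate
    if current.tilt != target.tilt:
        moves["tilt"] = target.tilt - current.tilt   # degrees to pitch
    if current.zoom != target.zoom:
        moves["zoom"] = target.zoom / current.zoom   # zoom ratio to apply
    return moves
```

For example, switching from `PTZ(0.0, 10.0, 1.0)` to `PTZ(90.0, 10.0, 2.0)` drives only pan and zoom, since the tilt is unchanged.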
In a possible implementation manner of the first aspect, before the camera acquires the second image at the second time, the method further includes: receiving an instruction to adjust the field of view; continuing to acquire the first image in the first shooting view; and, when acquisition of the first image is finished, proceeding to the step of adjusting the shooting view of the camera.
In this implementation, after receiving the instruction to adjust the field of view, the camera may still be capturing the first image or executing the first service on the first image. In that case, the camera first waits for the first image to be acquired and processed, and then adjusts its shooting view to the second shooting view at the second moment. This improves the flexibility of the camera during shooting and realizes dynamic switching of the camera's shooting view in a complex scene.
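The deferred switch can be sketched with a simple pending-request flag (class and attribute names are assumptions for illustration): a view-switch instruction that arrives while the camera is busy is recorded, and applied only once the current image's acquisition and processing are done.

```python
class DomeCamera:
    """Sketch of deferred view switching: a switch request received while
    the current image is still being captured or processed is queued and
    applied when that work finishes."""

    def __init__(self):
        self.view = "first_view"
        self.busy = False          # capturing / running a service right now?
        self.pending_view = None   # deferred switch target, if any

    def request_view_switch(self, new_view):
        if self.busy:
            self.pending_view = new_view   # defer until current work ends
        else:
            self.view = new_view           # idle: switch immediately

    def finish_current_image(self):
        """Called when the current image is acquired and its service done."""
        self.busy = False
        if self.pending_view is not None:
            self.view = self.pending_view
            self.pending_view = None
```
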
In a possible implementation manner of the first aspect, the first service includes at least one of the following services: face recognition, vehicle recognition, man-in-the-air detection, parking violation detection, overspeed detection, red light running detection, intrusion detection, pedestrian stun detection, vehicle collision detection, car theft detection, and fighting detection.
By implementing the implementation mode, the camera has various executable services and rich functions, and is beneficial to improving the utilization rate of the camera and meeting service requirements in different scenes.
In a possible implementation manner of the first aspect, before the camera acquires the second image at the second time and performs the second service on the second image, the method further includes: sending a switching request to a server; and receiving a switching request response sent by the server, wherein the switching request response comprises an identifier of the second service, or the switching request response comprises the identifier of the second service and a second shooting view, and the identifier of the second service and the second shooting view respectively correspond to the second moment.
By implementing the implementation manner, the mapping relation related to the second time period, the identifier of the second service, the first time period, the first shooting view and the like is stored in the server, so that when the view of the camera needs to be adjusted, the camera sends a switching request to the server to acquire the identifier of the corresponding service or the parameter of the shooting view, and the storage resource in the camera is effectively saved.
In a possible implementation manner of the first aspect, before the camera acquires the first image at the first time, the method further includes: receiving a binding relationship between a first time period and a first service, wherein the first time belongs to the first time period; and receiving a binding relationship between a second time period and a second service, wherein the second time belongs to the second time period.
In a possible implementation manner of the first aspect, a mapping relationship between the first time period and the identifier of the first service is stored in the camera in advance, and a mapping relationship between the second time period and the identifier of the second service is stored in the camera in advance.
In a possible implementation manner of the first aspect, a mapping relationship between the first time period and the first shooting field of view is stored in the camera in advance, and a mapping relationship between the second time period and the second shooting field of view is stored in the camera in advance.
By implementing this implementation, the mapping relationship among the first time period, the first shooting view, and the identifier of the first service, and the mapping relationship among the second time period, the second shooting view, and the identifier of the second service, are stored in the camera, which effectively saves the consumption of instructions for controlling the camera and shares part of the storage pressure for the server.
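A sketch of what these pre-stored mapping relations could look like inside the camera (the period boundaries, service identifiers, and PTZ values are illustrative assumptions, not values from the patent):

```python
# First mapping: time period -> service identifier.
PERIOD_TO_SERVICE = {
    ("08:00", "09:30"): "service_1",   # first time period -> first service
    ("10:00", "17:00"): "service_2",   # second time period -> second service
}

# Second mapping: time period -> shooting view (PTZ point location).
PERIOD_TO_VIEW = {
    ("08:00", "09:30"): {"pan": 0,  "tilt": 10, "zoom": 1},  # first view
    ("10:00", "17:00"): {"pan": 90, "tilt": 5,  "zoom": 2},  # second view
}

def bindings_for(period):
    """Return the (service identifier, shooting view) bound to a period."""
    return PERIOD_TO_SERVICE[period], PERIOD_TO_VIEW[period]
```

With both tables on the device, the camera can look up the service and view for the current period locally, instead of requesting them from the server at every switch.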
In a second aspect, an embodiment of the present application provides a method for configuring a camera, which is applied to a server, and the method includes: setting a first time period, and binding the first time period with a first shooting view and a first service; setting a second time period, and binding the second time period with a second shooting view and a second service; and sending configuration information to the camera, wherein the configuration information comprises a first mapping relation and a second mapping relation, the first mapping relation is a mapping relation between a first time period and the identifier of the first service, and the second mapping relation is a mapping relation between a second time period and the identifier of the second service.
In the method, the time period and the service corresponding to the time period are set, so that the services and the shooting views (or referred to as point positions) corresponding to the camera at different moments are different, thereby realizing that the camera executes different services in a time-sharing and point-position-dividing manner, being beneficial to improving the utilization rate of the camera and reducing the waste of hardware resources in a complex scene.
In a possible implementation manner of the second aspect, the first mapping relationship further includes: a mapping relationship between a first time period and a first shooting field of view; the second mapping relationship further includes: and a mapping relationship between the second time period and the second photographing field of view.
In the implementation manner, the first mapping relationship is a mapping relationship among the first time period, the identifier of the first service, and the first shooting view, and the second mapping relationship is a mapping relationship among the second time period, the identifier of the second service, and the second shooting view. It should be noted that the service and the shooting view determine the scene, that is, the first time period and the second time period correspond to different scenes.
In one possible implementation manner of the second aspect, the method further includes: receiving a cruise time period input through a user interface, dividing the cruise time period into a plurality of time periods, wherein the plurality of time periods comprise a first time period and a second time period, and the plurality of time periods further comprise: and executing the time periods of the services corresponding to the first time period and the second time period alternately according to the length of the first time period and the length of the second time period.
By implementing this implementation, only the cruise time period, and the first time period and the second time period within it, need to be set on the user interface, so that the service corresponding to the first time period and the service corresponding to the second time period are executed alternately in turn within the cruise time period, saving camera configuration time and improving configuration efficiency. The first time period and the second time period correspond to different scenes, and a scene is determined by a service and a shooting view (also called a point location).
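The alternating division described above can be sketched as follows; the function and its truncation behavior are assumptions for illustration, since the patent does not prescribe a concrete algorithm:

```python
from datetime import datetime, timedelta

def divide_cruise(start: datetime, end: datetime,
                  len_a: timedelta, len_b: timedelta):
    """Split the cruise period [start, end) into sub-periods that alternate
    between scene A (first time period length) and scene B (second time
    period length) until the cruise period is used up; the final sub-period
    is truncated at the cruise end."""
    periods, cursor, use_a = [], start, True
    while cursor < end:
        length = len_a if use_a else len_b
        stop = min(cursor + length, end)
        periods.append(("A" if use_a else "B", cursor, stop))
        cursor, use_a = stop, not use_a
    return periods
```

For a one-hour cruise with a 20-minute first period and a 10-minute second period, this yields the alternation A, B, A, B, with the last sub-period ending exactly at the cruise end.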
In a third aspect, an embodiment of the present application provides an apparatus for imaging, the apparatus including: the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first image at a first moment, and the processing unit is used for executing a first service on the first image; the acquisition unit is further used for acquiring a second image at a second moment, and the processing unit is further used for executing a second service on the second image, wherein the second moment is different from the first moment, and the second service is different from the first service.
In a possible implementation manner of the third aspect, the processing unit is further configured to: adjust the first shooting view of the camera at the first moment to a second shooting view at the second moment.
In a possible implementation manner of the third aspect, the processing unit is specifically configured to: adjusting the shooting visual field of the camera by adjusting one or more of the following parameters of a tripod head of the camera: pan, Tilt, and Zoom.
In a possible implementation manner of the third aspect, the apparatus further includes: a receiving unit, configured to receive an instruction to adjust the field of view; the processing unit is further configured to continue acquiring the first image in the first shooting view and, when acquisition of the first image is finished, start the step of adjusting the shooting view of the camera.
In a possible implementation manner of the third aspect, the first service includes at least one of the following services: face recognition, vehicle recognition, man-in-the-air detection, parking violation detection, overspeed detection, red light running detection, intrusion detection, pedestrian stun detection, vehicle collision detection, car theft detection, and fighting detection.
In a possible implementation manner of the third aspect, the apparatus further includes: a sending unit, configured to send a handover request to a server; the receiving unit is further configured to receive a handover request response sent by the server, where the handover request response includes an identifier of the second service; or the switching request response comprises an identifier of the second service and a second shooting view, and the identifier of the second service and the second shooting view respectively correspond to the second moment.
In a fourth aspect, an embodiment of the present application provides an apparatus for camera configuration, where the apparatus includes: the configuration unit is used for setting a first time period and binding the first time period with a first shooting view and a first service; the configuration unit is also used for setting a second time period and binding the second time period with a second shooting view and a second service; and the sending unit is used for sending configuration information to the camera, wherein the configuration information comprises a first mapping relation and a second mapping relation, the first mapping relation is a mapping relation between a first time period and the identifier of the first service, and the second mapping relation is a mapping relation between a second time period and the identifier of the second service.
In a possible implementation manner of the fourth aspect, the first mapping relationship further includes: a mapping relationship between a first time period and a first shooting field of view; the second mapping relationship further includes: and a mapping relationship between the second time period and the second photographing field of view.
In a possible implementation manner of the fourth aspect, the receiving unit is further configured to: receiving a cruise time period input through a user interface, dividing the cruise time period into a plurality of time periods, wherein the plurality of time periods comprise a first time period and a second time period, and the plurality of time periods further comprise: and executing the time periods of the services corresponding to the first time period and the second time period alternately according to the length of the first time period and the length of the second time period.
In a fifth aspect, an embodiment of the present application provides a camera, which includes a lens, a sensor, and a processor, where the lens is configured to collect first light at a first moment, and the sensor is configured to perform photoelectric conversion on the first light to generate a first image; the lens is further configured to collect second light at a second moment, and the sensor is further configured to perform photoelectric conversion on the second light to generate a second image, the first moment being different from the second moment; and the processor is configured to perform a first service on the first image and a second service on the second image, the first service and the second service being different.
In a possible embodiment of the fifth aspect, the camera further comprises: cloud platform for the shooting field of vision of adjustment camera specifically includes: the first imaging view corresponding to the first time is adjusted to a second imaging view corresponding to the second time.
In a sixth aspect, an embodiment of the present application provides an apparatus, which includes a display screen, a processor, and a communication module, where the display screen is configured to display a user interface, the processor is configured to receive a user operation to configure a camera on the user interface, and the communication module is configured to send configuration information of the camera to the camera.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing program code for execution by an apparatus, where the program code includes instructions for performing the method in the first aspect or any possible implementation manner of the first aspect.
In an eighth aspect, the present application provides a computer-readable storage medium storing program code for execution by an apparatus, where the program code includes instructions for performing the method of the second aspect or any possible implementation manner of the second aspect.
In a ninth aspect, the present application provides a computer program software product comprising program instructions, which when executed by an apparatus, performs the method of the first aspect or any possible embodiment of the first aspect. The computer software product may be a software installation package, which, in case it is required to use the method provided by any of the possible designs of the first aspect described above, may be downloaded and executed on a device to implement the method of the first aspect or any of the possible embodiments of the first aspect.
In a tenth aspect, the present application provides a computer program software product comprising program instructions that, when executed by an apparatus, cause the apparatus to perform the method of the second aspect or any of the possible embodiments of the second aspect. The computer software product may be a software installation package, which, in case it is required to use the method provided by any of the possible designs of the second aspect described above, may be downloaded and executed on a device to implement the method of the second aspect or any of the possible embodiments of the second aspect.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings needed in the description of the embodiments. Apparently, the accompanying drawings in the following description show some embodiments of the present application, and those skilled in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1A is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 1B is a schematic diagram of another system architecture provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for configuring a cruise function of a dome camera according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a scene configuration interface provided in an embodiment of the present application;
FIG. 4A is a schematic illustration of a cruise configuration interface provided by an embodiment of the present application;
FIG. 4B is a schematic illustration of yet another cruise configuration interface provided by an embodiment of the present application;
fig. 5 is a flowchart of an image capturing method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a cruise schedule provided by an embodiment of the present application;
fig. 7 is a flowchart of another image capturing method provided in this embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus provided in this embodiment of the present application;
fig. 9 is a schematic structural diagram of a camera provided in this embodiment of the present application;
FIG. 10 is a functional block diagram of another apparatus provided in this embodiment of the present application;
fig. 11 is a functional structure diagram of an apparatus provided in this embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The terms "first", "second", and the like in the description and in the claims in the embodiments of the present application are used for distinguishing different objects, and are not used for describing a particular order.
For the sake of understanding, the following description will be made about terms and the like that may be referred to in the embodiments of the present application.
(1) Dome camera
A dome camera, also called a spherical camera, integrates an integrated camera (including a zoom lens), a pan-tilt, a decoder, a protective cover, and other components into one body. The pan-tilt, also called a turntable, is the supporting equipment on which the camera is installed and fixed. Pan-tilts are divided into fixed pan-tilts and electric pan-tilts: a fixed pan-tilt is locked in position after the horizontal angle and pitch angle of the camera are adjusted, so its monitoring range is fixed and limited; an electric pan-tilt can control the camera to rotate horizontally or vertically through a program or a keyboard to change the monitoring range, so its monitoring range is wide and changeable. It should be noted that the dome camera in the embodiments of the present application uses a rotatable pan-tilt.
In security monitoring applications, PTZ is short for Pan/Tilt/Zoom and represents the azimuth (horizontal/vertical) movement of the pan-tilt and the zoom control of the lens; PTZ control is commonly referred to as pan-tilt control. Pan represents horizontal movement control of the pan-tilt, Tilt represents vertical movement control of the pan-tilt, and Zoom represents zoom control of the lens.
The embodiments of the present application are applicable not only to dome cameras but also to other cameras whose field of view can be adjusted, where field-of-view adjustment includes horizontal adjustment, vertical adjustment, or zoom adjustment of the camera lens. For convenience of description, the dome camera is used as an example hereinafter.
(2) Point location (point)
A point location includes one or more of a horizontal angle, a pitch angle, and a lens focal length. Setting a point location means setting the PTZ parameters of the dome camera, that is, setting at least one of the horizontal angle, the pitch angle, and the lens focal length at that point, where the horizontal angle corresponds to the parameter P (Pan), the pitch angle corresponds to the parameter T (Tilt), and the lens focal length corresponds to the parameter Z (Zoom). Multiple point locations can be set for one dome camera, and the dome camera switches among them through its own pan-tilt. Since one point location of the dome camera indicates one shooting view, switching the point location of the dome camera is equivalent to adjusting its shooting view.
The dome camera is widely applied to the monitoring of open areas, and its cruise setting is generally as follows: multiple point locations and a cruise time period corresponding to each point location are set for the dome camera, where each point location corresponds to one scene and all point locations are bound to the same service (such as face detection), so that the dome camera executes the same service in a time-shared, per-point-location manner to complete the cruise task. For example, in a certain campus, three point locations are set for the dome camera: point location 1, the campus entrance; point location 2, the roads in the campus; and point location 3, the edge of the campus perimeter wall. The cruise period corresponding to point location 1 is set to 8:00AM to 9:30AM, the period corresponding to point location 2 is set to 10:00AM to 17:00PM, and the period corresponding to point location 3 is set to 19:00PM to 22:00PM, with all three point locations bound to the face-detection service. After the setting is completed, the dome camera collects images at the three set point locations in sequence according to the schedule and executes the face-detection service on the images.
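The campus example above can be expressed as a simple cruise schedule plus a lookup; the point-location names are illustrative labels for the three campus locations, and the service (face detection) is the same at all three points:

```python
from datetime import time

# Three point locations, each active in its configured cruise period.
CRUISE = [
    (time(8, 0),  time(9, 30),  "point_1_campus_entrance"),
    (time(10, 0), time(17, 0),  "point_2_campus_road"),
    (time(19, 0), time(22, 0),  "point_3_perimeter_wall"),
]

def active_point(now: time):
    """Return the point location the dome camera should be at for `now`,
    or None when `now` falls outside every cruise period (e.g. between
    9:30AM and 10:00AM the camera has no scheduled point)."""
    for start, end, point in CRUISE:
        if start <= now < end:
            return point
    return None
```
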
However, if the user needs: face detection and vehicle detection are executed at a point position 1 (entrance of a garden), face detection is executed at a point position 2 (road in the garden), face detection is executed at a point position 3 (edge of a fence of the garden), and time periods corresponding to the three point positions are unchanged. In this case, two dome cameras need to be deployed, where the cruise setting of the dome camera 2 is the same as that described above, a point location 1 (entrance to the park) is set for the dome camera 2, the corresponding time period is still 8:00AM to 9:30AM, and the point location 1 is bound to the service of vehicle detection, so as to meet the user demand.
However, if the user needs: face detection (8:00AM to 9:30AM) is performed at point 1 (campus entrance), violation detection (10:00AM to 17:00PM) is performed at point 2 (campus road), intrusion detection (19:00PM to 22:00PM) is performed at point 3 (campus border). In this case, 3 dome cameras need to be deployed, where the dome camera 1 is responsible for face detection at the point location 1, the dome camera 2 is responsible for violation detection at the point location 2, and the dome camera 3 is responsible for intrusion detection at the point location 3, so as to meet user requirements. Therefore, when multiple service requirements exist in a certain scene or different service requirements exist in different scenes, the number of deployed ball machines needs to be increased to meet the user requirements. Although the number of ball machines is increased, the utilization rate of each ball machine is low, which not only causes the consumption of hardware cost to be increased, but also causes the waste of resources.
In view of the above existing problems, the embodiment of the present application provides a method for cruising a dome camera, which can meet multiple service requirements in the same scene or different service requirements in different scenes by using as few dome cameras as possible, thereby not only saving hardware cost, but also improving the utilization rate of resources.
A system architecture provided by the embodiments of the present application is described below. Referring to fig. 1A, fig. 1A is a system architecture diagram provided in an embodiment of the present application. As shown in fig. 1A, the system includes a camera and a server, wherein the camera can communicate with the server in a wireless or wired manner.
The server provides a user interface, the user interface can be used for a user to carry out cruise task configuration on the camera in advance, and the cruise task configuration specifically comprises the following steps: sequentially carrying out scene configuration on a plurality of scenes to be cruising so as to enable each scene to bind a point location and at least one service; and performing cruise configuration on each scene to determine a cruise time period corresponding to each scene and a monitoring time length corresponding to each scene. It will be appreciated that each point location corresponds to a field of view (otherwise known as a surveillance view) of the camera. After configuration is completed, the stored scene parameter information and cruise time information are sent to a camera, wherein the scene parameter information comprises an identifier of each scene in a plurality of scenes to be cruising, a point position corresponding to each scene and a service identifier corresponding to each scene, and the cruise time information comprises an identifier of each scene in the plurality of scenes to be cruising, a cruise time period corresponding to each scene and a monitoring duration corresponding to each scene. It should be noted that, in some possible embodiments, the scene parameter information and the cruise time information may also be combined into one piece of information, for example, the cruise configuration information, and the present application is not limited in particular.
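To make the two messages described above concrete, the scene parameter information and the cruise time information can be sketched as simple records. This is a minimal illustration; all class and field names are assumptions of this sketch, not a format actually defined by the embodiments.

```python
# Hedged sketch of the scene parameter information and cruise time
# information; class and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneParams:
    scene_id: str                      # identifier of a scene to be cruised
    point: Tuple[float, float, float]  # point location: (horizontal angle, pitch angle, focal length)
    service_ids: List[str]             # identifiers of the services bound to the scene

@dataclass
class CruiseTime:
    scene_id: str                      # identifier of the same scene
    period: Tuple[str, str]            # cruise time period, e.g. ("08:00", "09:30")
    duration_min: int                  # monitoring duration per visit, in minutes

# The campus-entrance example: scene 1 bound to one point location and face detection
scene_params = [SceneParams("scene1", (40.0, -40.0, 50.0), ["face_detection_1"])]
cruise_times = [CruiseTime("scene1", ("08:00", "09:30"), 90)]
```

As noted above, the two lists could equally be merged into a single piece of cruise configuration information.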
In a specific implementation, a user may directly access a website of a configuration interface corresponding to the camera on the server, so that the configuration interface of the camera is displayed on the user interface of the server, and after the camera is configured on the configuration interface in a related manner, related configuration information is directly stored in the local area of the camera. In another specific implementation, a client for configuring the camera may be installed in the server, the user may perform relevant configuration on the camera by clicking the client, and after the configuration is completed, the camera may automatically store the scene parameter information and the cruise time information, or the server may send the scene parameter information and the cruise time information of the camera to the camera. It should be noted that the server may be a computing device in a cloud environment or a computing device in an edge environment, and the present application is not limited specifically.
The camera is a camera device that has a pan-tilt and a service processing function; a lens of the camera device can rotate with the pan-tilt, and the camera is used for collecting a scene image at the monitoring view angle corresponding to a point location and performing related service processing on the scene image. The camera device is arranged on the pan-tilt, and the pan-tilt can control the camera device to rotate horizontally or vertically so as to change the monitoring range of the camera device. The camera may be a dome camera, a pan-tilt camera, a PTZ camera, or another imaging device with a turntable, which is not specifically limited in the present application. The camera is configured to determine, according to the cruise time information, that the current moment is a scene switching moment; if a next scene corresponds to the current moment, the camera closes the service corresponding to the previous scene (if a previous scene exists), opens the service corresponding to the next scene, and adjusts from the point location corresponding to the previous scene to the point location corresponding to the next scene. In some possible embodiments, if the scene parameter information further includes priority information corresponding to the scenes, then, when a next scene exists, if the priority corresponding to the previous scene (if a previous scene exists) is detection priority and a target is detected in the scene image corresponding to the previous scene, the camera may delay switching to the next scene until the detection task of the previous scene ends.
In a specific implementation, a scene management module, a cruise module, and a service module are deployed in a camera, where the scene management module is configured to receive a scene switching request sent by the cruise module and determine whether to perform scene switching currently, and when it is determined that the scene switching is performed, the scene management module is configured to instruct the service module to close a service corresponding to a previous scene (in the presence of the previous scene) and open a service corresponding to a next scene, and the scene management module is further configured to send a point location corresponding to the next scene to the cruise module. The cruise module is used for judging whether the current moment is the scene switching moment according to the cruise time information of the scene, sending a scene switching request to the scene management module to acquire a point location corresponding to the next scene when the current moment is determined to be the scene switching moment, and adjusting the camera device to the point location corresponding to the next scene after receiving the point location corresponding to the next scene sent by the scene management module. The service module is used for closing or opening a service corresponding to a certain scene according to the instruction information issued by the scene management module.
Referring to FIG. 1B, FIG. 1B illustrates yet another system architecture diagram of the present application. As shown in fig. 1B, the system includes a camera and a server, wherein the camera can communicate with the server in a wireless or wired manner.
In fig. 1B, the camera is a camera device with a pan-tilt and a service processing function, and the camera is configured to collect a scene image at the monitoring view angle corresponding to a point location and perform related service processing on the scene image. The camera device is arranged on the pan-tilt, and the pan-tilt can control the camera device to rotate horizontally or vertically so as to change the monitoring range of the camera device. The camera may be a dome camera, a pan-tilt camera, a PTZ camera, or another imaging device with a turntable, which is not specifically limited in the present application.
In fig. 1B, a cruise module and a service module are disposed in the camera, where the cruise module is configured to obtain the cruise time information from the server; for the description of the cruise time information, refer to the related description in fig. 1A, which is not repeated herein. When the cruise module detects, according to the cruise time information, that the current moment is a scene switching moment, it sends a scene switching request to the server, where the scene switching request includes the identifier of the next scene. The cruise module is further configured to receive the point location corresponding to the next scene sent by the server and adjust the camera device to that point location. The service module is configured to receive instruction information sent by the server and close or open the service corresponding to a certain scene according to the instruction information. In some possible embodiments, the cruise module is further configured to determine, when the current moment is a scene switching moment, whether to perform scene switching at the current moment, and to send a scene switching request to the server when it determines that scene switching is to be performed. For the specific determination process, refer to the description below.
In fig. 1B, the server provides a user interface, and the user interface is used for the user to perform cruise task configuration on the camera in advance; for the details of the cruise task configuration, refer to the related description of the server in fig. 1A, which is not repeated herein. After the configuration is completed, the server obtains the cruise time information and the scene parameter information; for their descriptions, refer to the description above, which is not repeated herein. The server is further configured to send the cruise time information to the camera. A scene management module is deployed in the server and is configured to send first information to the camera after receiving a scene switching request from the camera, where the first information includes the point location corresponding to the next scene and the service identifier corresponding to the next scene. In some possible embodiments, the scene management module is configured to determine, upon receiving a scene switching request from the camera, whether the camera satisfies the scene switching condition at the current moment (the scene switching condition is described below), and to send instruction information to the camera when the condition is satisfied, where the instruction information includes the point location corresponding to the next scene and the service corresponding to that scene; in some possible embodiments, the instruction information further includes information instructing to close the service corresponding to the current scene.
It can be seen that, in the camera in fig. 1A or the camera in fig. 1B, the service module and the cruise module are disposed in the camera, that is, it is explained that the service processing process on the scene image acquired by the camera is executed in the camera, in the system architecture shown in fig. 1A, the judgment on whether the scene switching condition is satisfied is executed by the camera itself, and in the system architecture shown in fig. 1B, the judgment on whether the scene switching condition is satisfied can be executed by the camera or the server, which is not specifically limited in the present application.
In the following, the camera is exemplified by a dome camera, but the embodiments of the present application do not limit the camera to being a dome camera.
It should be noted that before the dome camera executes a cruise task, cruise task configuration needs to be performed on the dome camera first, so that each scene in the cruise path is bound to one shooting view and at least one service, where a scene is determined by the shooting view and the services together, and the services bound in different scenes may be different, so that multiple service requirements in the same scene or different service requirements in different scenes can be met with as few dome cameras as possible. Referring to fig. 2, fig. 2 is a flowchart of a method for configuring a cruise task of a dome camera according to an embodiment of the present application, where the method includes, but is not limited to, the following steps:
S101, setting a plurality of scenes to be cruised by the dome camera on a user interface, and performing scene configuration on each of the plurality of scenes.
In the embodiment of the application, a plurality of scenes to be navigated by the dome camera are set on the user interface, and each scene in the plurality of scenes is configured, wherein configuring the scene for each scene means: binding a point location (or referred to as a shooting view) and at least one service for each scene. After the scene configuration is completed, scene parameter information can be obtained, wherein the scene parameter information includes an identifier of each scene in a plurality of scenes to be cruising, a point location corresponding to each scene, and a service identifier corresponding to each scene. It should be noted that the scenes are determined by point locations and services together, and for any two scenes, if and only if the point locations corresponding to the two scenes are the same and the services corresponding to the two scenes are also the same, the two scenes are the same scene. If the point locations corresponding to the two scenes are different or the services corresponding to the two scenes are different, the two scenes are different scenes.
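The scene-identity rule above (two scenes are the same scene if and only if both their point locations and their bound services match) can be sketched as follows; the function name and argument shapes are assumptions for illustration.

```python
# Two scenes are the same scene iff their point locations are equal AND
# their bound service sets are equal; otherwise they are different scenes.
def same_scene(point_a, services_a, point_b, services_b):
    return point_a == point_b and frozenset(services_a) == frozenset(services_b)

# Same point location but different services: different scenes
print(same_scene((60, -30, 85), {"face_detection"},
                 (60, -30, 85), {"vehicle_detection"}))  # False
```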
The point location represents a shooting view angle (or referred to as a shooting view) of the dome camera. Setting a point location for the dome camera means setting the PTZ coordinates of the dome camera, that is, the horizontal angle, the pitch angle, and the focal length of the dome camera, where the range of the horizontal angle may be [0, 2π], the range of the pitch angle may be (-π, 0), and the focal length determines the imaging size, field angle, depth of field, and the like of an object shot by the lens; the shorter the focal length, the smaller the image, the larger the field angle, and the deeper the depth of field. In some possible embodiments, the point location further includes an aperture value, a magnification value, and the like.
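A hypothetical range check for a PTZ point location, following the value ranges given above (horizontal angle in [0, 2π], pitch angle in (-π, 0)); this helper is an assumption of the sketch, not part of any camera SDK.

```python
import math

# Returns True when the PTZ coordinates fall inside the ranges stated above.
def valid_point(horizontal, pitch, focal_length):
    return (0.0 <= horizontal <= 2 * math.pi  # horizontal angle in [0, 2*pi]
            and -math.pi < pitch < 0.0        # pitch angle in (-pi, 0)
            and focal_length > 0.0)           # focal length must be positive

print(valid_point(3.14, -0.5, 50.0))  # True
print(valid_point(6.5, -0.5, 50.0))   # False: exceeds 2*pi
```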
A service represents an algorithm or program for detecting a target or a target event in a monitored environment, for example, identifying a target in an image or identifying a target event in an image. The target may be a person, a human face, a vehicle, or the like, and the target event may be illegal parking, vehicle speeding, a vehicle running a red light, vehicle theft, a vehicle collision, pedestrians fighting, a pedestrian falling, pedestrian intrusion, or the like. For example, the service may be face detection, face recognition, vehicle detection, parking violation detection, overspeed detection, intrusion detection, detection of an airplane or an unmanned aerial vehicle, red-light-running detection, vehicle collision detection, pedestrian falling detection, vehicle theft detection, and the like, which are not specifically limited in the present application. Service parameters such as a preset detection frame size, a snapshot mode, and an image brightness compensation coefficient may be set for a service. Taking a service bound in a certain scene, face detection, as an example, the service parameters of the service in the scene may be set, for example, the preset detection frame size, the snapshot mode, the face sensitivity, and the face brightness compensation coefficient. Taking the preset detection frame size as an example, when face detection is performed on the scene image collected by the dome camera at the point location and the size of a detected face frame is smaller than the preset detection frame size, the face frame is not taken as a detected face. It should be noted that the service parameters of different services may be different; for example, the preset detection frame size for face detection may be set to 20 × 20, while the preset detection frame size for vehicle detection may be set to 30 × 30.
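The preset-detection-frame-size parameter described above amounts to a minimum-size filter on detected boxes. A hedged sketch, assuming boxes are (width, height) pairs and using the 20 × 20 face-detection example:

```python
# Discard detections whose frame is smaller than the preset detection frame
# size, as described above; the (w, h) box format is an assumption of this sketch.
def filter_detections(boxes, min_w=20, min_h=20):
    return [(w, h) for (w, h) in boxes if w >= min_w and h >= min_h]

# A 15x15 face frame is smaller than the preset 20x20 size and is dropped
print(filter_detections([(15, 15), (25, 30)]))  # [(25, 30)]
```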
It can be seen that, by binding a plurality of services to a certain scene and configuring the service parameters of each service respectively, configuration separation of different services in different scenes can be realized.
In some possible embodiments, the same service may have different service parameters in different scenes. Taking face detection as an example, suppose that the dome camera is installed in a campus, scene 1 corresponds to a certain road in the campus, and scene 2 corresponds to a square outside the campus. If both scene 1 and scene 2 are bound to the face detection service, then because the square outside the campus is farther from the dome camera than the road in the campus, the preset detection frame size corresponding to the face detection service in scene 2 is smaller than that in scene 1. Configuration separation of the same service in different scenes is thereby realized.
In some possible embodiments, in addition to binding a point location and at least one service for each scene, the scene configuration of each scene further includes setting the priority of the service corresponding to each scene, where the priority includes detection priority and non-detection priority, and the priority may subsequently be used to determine whether the dome camera satisfies the scene switching condition. Specifically, if a next scene exists for the dome camera and the priority of the service corresponding to the previous scene is "non-detection priority", the dome camera satisfies the scene switching condition; if a next scene exists, the priority of the service corresponding to the previous scene is "detection priority", and a target is detected in the scene image corresponding to the previous scene, the dome camera does not satisfy the scene switching condition; if a next scene exists, the priority of the service corresponding to the previous scene is "detection priority", and no target is detected in the scene image corresponding to the previous scene, the dome camera satisfies the scene switching condition.
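The three switching rules above can be condensed into one predicate; the function and parameter names are illustrative assumptions, not the embodiment's actual interface.

```python
# Scene switching condition, per the rules above: with "detection priority",
# the switch is deferred while a target is still detected in the previous scene.
def may_switch(has_next_scene, prev_priority, target_detected):
    if not has_next_scene:
        return False                                  # nothing to switch to
    if prev_priority == "non-detection priority":
        return True                                   # switch unconditionally
    return not target_detected                        # "detection priority" case

print(may_switch(True, "detection priority", True))   # False: delay the switch
print(may_switch(True, "detection priority", False))  # True
```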
It should be noted that the user interface may be provided by the server. In a specific implementation, the user interface may be a configuration interface generated by the server in response to the user's operation of accessing the configuration web address of the dome camera; specifically, the dome camera and the server are connected wirelessly, and the user inputs the configuration web address of the dome camera on the server, in which case the user interface of the server is the configuration webpage of the dome camera. In another specific implementation, the user interface may also be a configuration interface provided by a client that is installed in the server and used for configuring the dome camera; specifically, the client for configuring the dome camera is installed in the server, and the user clicks the client, in which case the user interface of the server is the configuration interface provided by the client.
For example, the user interface may be the interface shown in fig. 3. It should be noted that the interface shown in fig. 3 may also be referred to as a scene configuration interface; fig. 3 is only an example, and the scene configuration interface of the present application is not limited to the one shown in fig. 3, as it may be any interface that implements the scene configuration function described above. As shown in fig. 3, fig. 3 is a dome camera ID-scene configuration interface, where ID represents the identifier of the dome camera. A plurality of functional modules are listed on the interface, for example, a point location binding module, a service binding module, an added scene module, and a monitoring view angle module. The point location binding module is used for point location setting, with input boxes for the horizontal angle, the pitch angle, and the focal length listed in sequence; in some possible embodiments, an input box for the aperture may also be added to the point location binding module. The service binding module is used for setting services, with service options such as "face detection 1" and "vehicle detection 2" listed in it; taking "face detection 1" as an example, the "1" is the identifier of face detection and uniquely represents it, and service parameters, for example the preset detection frame size, may also be set. The monitoring view angle module is used for displaying the scene image currently collected at the point location set on the interface, and can assist the user in the point location binding setting of a scene. The added scene module is used for displaying the relevant parameters of the currently set scenes, for example, in the added scene module in fig. 3, "scene 1: point location (40°, -40°, 50), face recognition 3 and vehicle detection 2, detection priority". An "add" key is further provided on the interface shown in fig. 3; after the point location binding module, the service binding module, and the priority setting module are correspondingly set, clicking the "add" key adds the relevant configuration of the currently set scene to the added scene module.
Optionally, the interface shown in fig. 3 may further be provided with a priority setting module, where the priority setting module is configured to set a priority for the services selected in the service binding module. It can be understood that, in the interface shown in fig. 3, the service binding module is separated from the priority setting module, so that when a plurality of services are selected in the service binding module and a priority is set, the selected services share the same priority setting, for example, all "detection priority" or all "non-detection priority". In some possible embodiments, the priority setting module may also be disposed inside the service binding module (not shown in fig. 3), so that a priority may be set for each of the plurality of services selected in the service binding module, and different services may thus have different priority settings. It should be noted that, when the priority is set to "detection priority", whether a target is detected in the currently captured scene image needs to be considered when determining whether the scene switching condition is satisfied.
Specifically, a user performs scene configuration on the user interface shown in fig. 3, and in the point binding module, the user may input the horizontal angle, the pitch angle, and the focal length of the point location through a keyboard, for example, the point location corresponding to the scene 2 is "horizontal angle 60 °, pitch angle-30 °, and focal length 85 mm". In a specific implementation, a progress bar (not shown in fig. 3) may be further disposed beside the horizontal angle, the pitch angle, and the focal length in the point location binding module, the horizontal angle, the pitch angle, and the focal length are respectively set by dragging the corresponding progress bar, and a value corresponding to the current position is displayed beside the progress bar. In another specific implementation, two adjusting keys, namely a "+" key and a "-" key, can be further arranged in the point location binding module, and can be used for adjusting the horizontal angle, the pitch angle or the focal length. For example, the horizontal angle is set, assuming that the current horizontal angle is 50 °, the angle is found to be slightly left according to the scene image displayed by the monitoring view module on the right side, and the horizontal angle can be increased to a proper value by clicking the "+" key in conjunction with the scene image displayed by the monitoring view module. In the service binding module, service configuration is performed on the scene 2, the service of 'face detection 1' is selected, and the size of a preset detection frame is set to be 20 × 20. Optionally, the priority of the service "face detection 1" may also be set, for example, to check "non-detection priority", that is, to indicate that the priority of the corresponding service is set as non-detection priority. 
In summary, after the point location binding, service binding, and priority setting (if any) of scene 2 are completed and the "add" key on the right side is clicked, the scene configuration information of scene 2 is stored, and an entry "scene 2: point location (60°, -30°, 85), face detection 1, non-detection priority" is added to the added scene module in fig. 3, indicating that two scenes, scene 1 and scene 2, have currently been configured for the dome camera. Other scenes required by the dome camera may be configured by referring to the scene configuration method of scene 2, and details are not described herein again.
It should be noted that, for the above-mentioned scene configuration process, the embodiment of the present application does not limit the execution sequence of the point corresponding to the setting scene, the service corresponding to the setting scene, and the priority of the service corresponding to the setting scene.
And S102, performing cruise configuration on each scene in the plurality of scenes at the user interface.
In this embodiment of the present application, after a plurality of scenes are set for the dome camera, cruise configuration needs to be performed on each of the plurality of scenes on the user interface, where performing cruise configuration on each scene means: a cruise period is set for each scene. In some possible embodiments, when a plurality of scenes correspond to the same cruise time period, in this case, the monitoring time length corresponding to each scene may also be set. For the related description of the user interface, reference may be made to the related description of S101, which is not described herein again.
The cruise time period represents a working time period of the dome camera, and the cruise time period of a scene indicates that the scene is effective within that cruise time period. For example, if only the cruise time period 8:00AM to 10:00AM is set for scene 1, scene 1 is monitored within the time period 8:00AM to 10:00AM; specifically, within that period the dome camera may collect scene images at the point location corresponding to scene 1 and execute the service corresponding to scene 1 on the scene images. The monitoring duration corresponding to a scene refers to the duration or residence time of the dome camera in the scene each time, that is, the duration for which the dome camera continuously monitors the scene on each visit. For example, if two scenes, scene 1 and scene 2, correspond to the cruise time period 8:00AM to 9:00AM, the monitoring duration of scene 1 is set to 30min, and the monitoring duration of scene 2 is set to 30min, then within the cruise time period 8:00AM to 9:00AM, scene 1 may be monitored at 8:00AM-8:30AM and scene 2 at 8:30AM-9:00AM, or scene 2 may be monitored at 8:00AM-8:30AM and scene 1 at 8:30AM-9:00AM; this is not specifically limited in the present application.
In a specific implementation, when a plurality of scenes exist in a certain cruise time period, the cruise order of each scene in the cruise time period may be set, so that the dome camera cruises the scenes in turn according to their cruise order within the cruise time period. For example, if scene 1 and scene 2 correspond to 8:00AM to 9:00AM, the monitoring duration of scene 1 is 30min, the monitoring duration of scene 2 is 30min, the cruise order of scene 1 is 1, and the cruise order of scene 2 is 2, then scene 1 is monitored first at 8:00AM-8:30AM and scene 2 is monitored afterwards at 8:30AM-9:00AM.
In one implementation, when there are multiple scenes in a cruise time period, the monitoring duration of each of the scenes determines the number of times each scene is cruised in turn within the cruise time period. For example, if scene 1 and scene 2 are arranged in 8:00AM to 9:00AM, the monitoring duration of scene 1 is 30min, and the monitoring duration of scene 2 is 30min, then within 8:00AM to 9:00AM, scene 1 and scene 2 may each be monitored once. If scene 1 and scene 2 are arranged in 8:00AM to 9:00AM, the monitoring duration of scene 1 is 15min, and the monitoring duration of scene 2 is 15min, then scene 1 and scene 2 may each be monitored twice within 8:00AM to 9:00AM; if the cruise order of scene 1 is set to first and the cruise order of scene 2 to second, the cruise process within 8:00AM to 9:00AM is: scene 1 (8:00AM to 8:15AM) - scene 2 (8:15AM to 8:30AM) - scene 1 (8:30AM to 8:45AM) - scene 2 (8:45AM to 9:00AM), that is, scene 1 and scene 2 are monitored twice in turn.
Therefore, after the cruise time period and the monitoring time length corresponding to each scene in the plurality of scenes corresponding to the cruise time period are set, the scene switching time corresponding to the cruise time period can be determined according to the cruise time period and the monitoring time lengths of the scenes. For example, if the cruise process in 8:00AM to 9:00AM is: scene 1(15min) -scene 2(15min) -scene 1(15min) -scene 2(15min), there are four scene switching times corresponding to 8:00AM to 9:00AM, which are 8:00AM, 8:15AM, 8:30AM, and 8:45AM, respectively. When a certain cruising time period only corresponds to a scene, the starting time of the cruising time period is a scene switching time of the scene.
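The derivation above (the scene switching times follow from the cruise time period plus the per-scene monitoring durations) can be sketched as below, representing times as minutes since midnight; the helper name and time format are assumptions of this sketch.

```python
# Enumerate the scene switching times inside one cruise time period by
# stepping through the monitoring durations in cruise order, cyclically.
def switching_times(start_min, end_min, durations):
    times, t, i = [], start_min, 0
    while t < end_min:
        times.append(t)                     # a scene switch happens at time t
        t += durations[i % len(durations)]
        i += 1
    return times

# 8:00AM-9:00AM with scenes 1 and 2 monitored 15 min each:
print(switching_times(8 * 60, 9 * 60, [15, 15]))  # [480, 495, 510, 525], i.e. 8:00, 8:15, 8:30, 8:45
```

A period with a single scene degenerates to one switching time at the start of the period, matching the last sentence above.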
In some possible embodiments, after the cruise time period and the monitoring durations of the scenes within the cruise time period are set, the cruise time period is automatically divided into a plurality of time periods, which improves the configuration efficiency of the dome camera. Assume, for example, that the plurality of time periods include a first time period and a second time period, where the first time period is bound to a first shooting view (or referred to as a first point location) and a first service, and the second time period is bound to a second shooting view (or referred to as a second point location) and a second service. It should be noted that, after the configuration is completed, the server sends configuration information to the dome camera, where the configuration information includes a first mapping relationship and a second mapping relationship; the first mapping relationship is the mapping relationship among the first time period, the identifier of the first service, and the first shooting view, and the second mapping relationship is the mapping relationship among the second time period, the identifier of the second service, and the second shooting view. Since a scene is determined by both the shooting view and the service, the first mapping relationship corresponds to the scene corresponding to the first time period, and the second mapping relationship corresponds to the scene corresponding to the second time period. In this way, the dome camera can execute different services at different point locations at different moments, or execute different services at the same point location at different moments.
It can be understood that the first time period is the monitoring duration of the first shooting view and the scene determined by the first service, and the second time period is the monitoring duration of the second shooting view and the scene determined by the second service. When the sum of the length of the first time period and the length of the second time period is less than the length of the cruise time period, the plurality of time periods further include: and executing the time periods of the services corresponding to the first time period and the second time period alternately according to the length of the first time period and the length of the second time period.
For example, assume that a cruise time period is set to 9:00AM-10:00AM and corresponds to scene 1 and scene 2, where scene 1 precedes scene 2 in the cruise order, the monitoring duration of scene 1 is 10min, the monitoring duration of scene 2 is 20min, scene 1 is bound to point 1, service 1, and service 2, and scene 2 is bound to point 2 and service 3. As can be seen from Table 1, after the configuration is completed, the cruise time period 9:00AM-10:00AM is automatically divided into four time periods: scene 1 is monitored during 9:00AM-9:10AM, scene 2 during 9:10AM-9:30AM, scene 1 during 9:30AM-9:40AM, and scene 2 during 9:40AM-10:00AM. For example, 9:00AM-9:10AM may be the first time period, and 9:10AM-9:30AM or 9:40AM-10:00AM may be the second time period. As another example, the first time period may be 9:30AM-9:40AM and the second time period may be 9:40AM-10:00AM.
TABLE 1
Time period        Scene      Point location    Service(s)
9:00AM-9:10AM      Scene 1    Point 1           Service 1, Service 2
9:10AM-9:30AM      Scene 2    Point 2           Service 3
9:30AM-9:40AM      Scene 1    Point 1           Service 1, Service 2
9:40AM-10:00AM     Scene 2    Point 2           Service 3
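As a minimal sketch (hypothetical helper names and data, not the patented implementation), the automatic division of a cruise time period into alternating monitoring periods could look like this:

```python
from datetime import datetime, timedelta

def divide_cruise_period(start, end, scenes):
    """Divide [start, end) into monitoring periods by cycling through
    `scenes`, a list of (scene_id, monitoring_minutes) in cruise order."""
    periods = []
    t, i = start, 0
    while t < end:
        scene_id, minutes = scenes[i % len(scenes)]
        t_next = min(t + timedelta(minutes=minutes), end)
        periods.append((t, t_next, scene_id))
        t, i = t_next, i + 1
    return periods

# Example from Table 1: 9:00AM-10:00AM, scene 1 (10min) before scene 2 (20min).
start = datetime(2021, 1, 1, 9, 0)
end = datetime(2021, 1, 1, 10, 0)
periods = divide_cruise_period(start, end, [("scene 1", 10), ("scene 2", 20)])
for s, e, sid in periods:
    print(s.strftime("%H:%M"), "-", e.strftime("%H:%M"), sid)
```

Running this sketch reproduces the four periods of Table 1: 09:00-09:10 scene 1, 09:10-09:30 scene 2, 09:30-09:40 scene 1, 09:40-10:00 scene 2.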
In some possible embodiments, the cruise configuration for each scene may also be performed as follows: first, a cruise time period is set, and then the configured scenes are added to the cruise time period as required. When a plurality of scenes are added to a certain cruise time period, the monitoring duration of each scene in the cruise time period can be set in turn.
For example, referring to fig. 4A, fig. 4A is an example of a cruise configuration interface provided by the present application. The cruise configuration interface includes a cruise time period module, a scene identifier module, a cruise path module, and the like. The cruise time period module includes a time scale bar and a slide bar; the length of the slide bar is adjustable, and a cruise time period is set by dragging the slide bar. The scene identifier module displays each currently created scene together with an input box for the monitoring duration of each scene and an input box for the cruise order corresponding to the scene, and allows a user to select the scene to be added to the cruise time period set in the cruise time period module, set the monitoring duration corresponding to the scene, and the like. The cruise path module is used to display the mapping relationship between the currently set cruise time period and the added scenes, and may also display the monitoring duration, cruise order, and the like corresponding to each scene. The interface shown in fig. 4A is further provided with a "save" button; after the settings in the cruise time period module and the scene identifier module are completed, clicking the "save" button adds the mapping relationship between the currently set cruise time period and the selected scenes to the cruise path module.
As shown in fig. 4A, a cruise time period may be set on the interface, for example, the slide bar is dragged to select the cruise time period 9:00AM-10:00AM, and then the scenes corresponding to the cruise time period are selected from the created scenes, for example, scene 1 and scene 2 among the created scenes displayed in fig. 4A, with the monitoring duration of scene 1 set to 10min and the monitoring duration of scene 2 set to 20min. In some possible embodiments, when a plurality of scenes are set in the same cruise time period, the cruise order of the scenes in the cruise time period may also be set; for example, in fig. 4A the cruise order of scene 1 in 9:00AM-10:00AM is set to 1, and the cruise order of scene 2 in 9:00AM-10:00AM is set to 2. After the cruise configuration for the cruise time period 9:00AM-10:00AM is completed, the "save" button is clicked, and the cruise path module in fig. 4A adds a record such as "cruise time period 9:00AM-10:00AM, scene 1 (monitoring duration 10min, cruise order 1), scene 2 (monitoring duration 20min, cruise order 2)", so that the cruise time period corresponding to each scene, the monitoring duration of each scene, and the cruise order of each scene in the corresponding cruise time period can be clearly seen. The cruise configuration of other scenes in the cruise path can be performed in the above manner, and the details are not repeated here.
It should be noted that the interface shown in fig. 4A may be used to set the daily cruise time periods of the dome camera. In some possible embodiments, the interface shown in fig. 4A may also be extended with date options so that the cruise configuration of the dome camera can be set on a weekly basis. It should be noted that the daily cruise configurations of the dome camera may be the same or different, which is not specifically limited in the embodiments of the present application.
Referring to fig. 4B, fig. 4B is an example of another cruise configuration interface provided by the present application, namely a dome camera ID-cruise configuration interface, where ID represents the identifier of the dome camera. The interface includes a scene identifier input box, a cruise time period module, a monitoring duration input box, a cruise path module, and the like. A drop-down box is provided below the scene identifier input box, listing each currently created scene; the user may select one scene from the drop-down box at a time to configure its cruise time period, monitoring duration, and the like, and after a scene is selected, the monitoring duration corresponding to the scene may be set in the monitoring duration input box. In some possible embodiments, the interface shown in fig. 4B may also provide a cruise order input box for the user to set the cruise order of the corresponding scene in the cruise time period. The cruise time period module includes a time scale bar and a slide bar; the length of the slide bar is adjustable, and a cruise time period can be set by dragging the slide bar. The cruise path module is used to display the mapping relationship between a configured scene and its cruise time period, and may also display the monitoring duration, cruise order, and the like corresponding to the scene. The interface shown in fig. 4B is further provided with a "save" button; after the cruise time period, scene identifier, monitoring duration, and the like are set, clicking the "save" button adds the mapping relationship between the currently selected scene and the set cruise time period to the cruise path module.
As shown in fig. 4B, a scene to be configured, for example scene 3, may be selected on the interface, the slide bar is dragged to set the cruise time period corresponding to scene 3, for example 14:00-18:00, the monitoring duration of scene 3 is set to 20min, and finally the "save" button is clicked, thereby completing the cruise configuration of scene 3. After the "save" button is clicked, a new record, namely "scene identifier 3, cruise time period 14:00-18:00, monitoring duration 20min", is added to the cruise path module in fig. 4B. In some possible embodiments, besides dragging the slide bar, the cruise time period on the interface shown in fig. 4B may also be set by entering a preset time period through a keyboard, which is not specifically limited in the present application. It should be noted that the cruise order of a certain scene in the corresponding cruise time period may also be set on the interface shown in fig. 4B. For example, in fig. 4B it can be seen that both scene 1 and scene 2 already set in the cruise path correspond to the cruise time period 9:00AM-10:00AM, in which the cruise order of scene 1 is 1 and the cruise order of scene 2 is 2, that is, scene 1 is monitored first and then scene 2.
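A sketch of the record that clicking "save" might append to the cruise path module; the in-memory model and field names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical in-memory model of the cruise path module; field names
# are illustrative and not taken from the patent.
cruise_path = []

def save_cruise_config(scene_id, cruise_period, monitoring_minutes, cruise_order=None):
    """Append a scene's cruise configuration, as the "save" button might."""
    record = {
        "scene": scene_id,
        "cruise_period": cruise_period,      # e.g. ("14:00", "18:00")
        "monitoring_minutes": monitoring_minutes,
        "cruise_order": cruise_order,        # optional, within a shared period
    }
    cruise_path.append(record)
    return record

# The scene 3 example above: cruise period 14:00-18:00, monitoring duration 20min.
save_cruise_config("scene 3", ("14:00", "18:00"), 20)
print(cruise_path)
```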
It should be noted that fig. 4A or 4B is only an example of a cruise configuration interface, but the present application does not limit the cruise configuration interface to the example shown in fig. 4A or 4B, and the cruise configuration interface may be any user interface that implements the cruise configuration function described above.
It should be noted that the interface shown in fig. 3 and the interface shown in fig. 4A (or fig. 4B) may be two separate and independent interfaces; the interface shown in fig. 3 may switch to the interface shown in fig. 4A (or fig. 4B) automatically after the scene configuration is completed, or through a jump button for entering the cruise configuration interface. In some possible embodiments, the interface shown in fig. 3 and the interface shown in fig. 4A (or fig. 4B) may also be combined into one interface, on which the cruise configuration of already added scenes can be performed while the scene configuration of a newly added scene is performed. The display form of the interface shown in fig. 3 and the interface shown in fig. 4A (or fig. 4B) is not specifically limited in the embodiments of the present application.
In the present application, the execution sequence of S101 and S102 is not specifically limited, that is, the scene configuration (S101) may be performed first and then the cruise configuration (S102) may be performed, the cruise configuration (S102) may be performed first and then the scene configuration (S101) may be performed, or the scene configuration (S101) and the cruise configuration (S102) may be performed simultaneously.
After the configuration in S101 and S102 is completed, the server sends the configuration information of the dome camera to the dome camera.
It can be seen from the implementation of the above cruise configuration method that a point location and at least one service are bound to each of the plurality of scenes to be monitored by the dome camera in a cruise manner, a cruise time period is set for each scene, and so on, so that as few dome cameras as possible meet different service requirements in the same scene or in different scenes, and the utilization rate of a single dome camera is improved.
After the cruise task of the dome camera is configured by the method described in the embodiment of fig. 2, the dome camera may execute the preset cruise task based on the stored configuration information, thereby switching between different scenes and performing the processing of at least one service on the collected scene images.
Referring to fig. 5, fig. 5 is a flowchart of an image capturing method provided in an embodiment of the present application. The method includes, but is not limited to, the following steps:
S201, the dome camera detects that the current time is the time for switching to the second scene.
In the embodiments of the present application, the cruise time information includes an identifier of each of the plurality of scenes to be cruised by the dome camera and the cruise time period corresponding to each scene. The dome camera detects, according to the cruise time information, that the current time is a scene switching time and that the current time corresponds to the second scene; in addition, the dome camera is located at the first point location at the previous time, that is, the time immediately before the current time. It should be noted that the second scene is bound to the second point location and the second service.
The second point location represents a shooting view (or shooting angle) of the dome camera, and the second service may be face detection, face recognition, vehicle detection, parking violation detection, overspeed detection, intrusion detection, motor vehicle/non-motor vehicle/pedestrian detection, red light running detection, vehicle collision detection, pedestrian fall detection, vehicle theft detection, and the like.
In some possible embodiments, when a plurality of scenes correspond to a certain cruise time period, the cruise time information further includes the monitoring duration corresponding to each scene and the cruise order corresponding to each scene. It should be noted that the cruise time information may be pre-stored in the dome camera, or may be obtained by the dome camera from the server, which is not specifically limited in the present application. It should also be noted that, in addition to the cruise time information, the dome camera stores scene parameter information, where the scene parameter information includes the point location corresponding to each of the plurality of scenes to be cruised and the service bound to each scene; the scene parameter information may likewise be obtained by the dome camera from the server. In some possible embodiments, the cruise time information and the scene parameter information may be two separate pieces of information, or may be combined into one piece of information, which is not specifically limited in the present application.
The previous time of the current time may be any time within a preset time interval before the current time, and the preset time interval may be 20ms, 50ms, 100ms, 200ms, 350ms, 1s, 4s or other values.
For example, the cruise time information may be the cruise timetable shown in Table 2. As shown in Table 2, assume that the only cruise time period of a certain day is 9:00AM-10:00AM, and the cruise time period corresponds to two scenes, namely scene 1 and scene 2, where the monitoring duration corresponding to scene 1 is 10min, the monitoring duration corresponding to scene 2 is 20min, and in the cruise time period 9:00AM-10:00AM the cruise order of scene 1 may be set to precede that of scene 2. Referring to Table 3, Table 3 is an example of the scene parameter information. It can be seen from Table 3 that scene 1 is bound to point 1 and service 1, where point 1 represents the shooting view of the dome camera in scene 1; scene 2 is bound to point 2 and service 2, where point 2 represents the shooting view of the dome camera in scene 2, and service 1 is different from service 2. Of course, in some possible embodiments, Tables 2 and 3 may be combined into one table. It can be understood that in the cruise time period 9:00AM-10:00AM, the dome camera cruises alternately in the order scene 1 - scene 2 - scene 1 - scene 2. In this case, the scene switching times are 9:00AM, 9:10AM, 9:30AM, and 9:40AM. Taking the current time 9:00AM as an example, the scene corresponding to the current time is scene 1; in this case, the previous time may be 8:59:00AM, 8:59:30AM, and so on, which is not specifically limited in the present application.
The above steps are described with reference to the modules of the camera in fig. 1A, where the camera is a dome camera. After the dome camera is started, the scene management module obtains the cruise time information and the scene parameter information from the memory of the dome camera and sends the cruise time information to the cruise module. The cruise module in the dome camera detects, based on the cruise time information, that the current time is a scene switching time and that the current time corresponds to the second scene.
TABLE 2
Cruise time period    Scene      Monitoring duration    Cruise order
9:00AM-10:00AM        Scene 1    10min                  1
9:00AM-10:00AM        Scene 2    20min                  2
TABLE 3
Scene identifier    Point location    Service
Scene 1             Point 1           Service 1
Scene 2             Point 2           Service 2
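Tables 2 and 3 together can be modelled as a small lookup, sketched below under assumed data structures (the actual storage format of the cruise time information and scene parameter information is not specified here):

```python
from datetime import time

# Divided schedule derived from Table 2 (scene 1: 10min, scene 2: 20min).
cruise_schedule = [
    (time(9, 0),  time(9, 10), "scene 1"),
    (time(9, 10), time(9, 30), "scene 2"),
    (time(9, 30), time(9, 40), "scene 1"),
    (time(9, 40), time(10, 0), "scene 2"),
]

# Scene parameter information from Table 3.
scene_params = {
    "scene 1": {"point": "point 1", "services": ["service 1"]},
    "scene 2": {"point": "point 2", "services": ["service 2"]},
}

def scene_at(t):
    """Return the scene monitored at time t, or None outside the cruise period."""
    for start, end, scene_id in cruise_schedule:
        if start <= t < end:
            return scene_id
    return None

# The scene switching times are the starts of the divided periods.
switch_times = [start for start, _, _ in cruise_schedule]

print(scene_at(time(9, 5)), scene_params.get(scene_at(time(9, 5))))
print(scene_at(time(8, 59)))  # no corresponding scene before the cruise period
```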
It should be noted that each point location in Table 3 represents a shooting view. Based on the configuration information shown in Tables 2 and 3, the manner in which the dome camera performs scene switching at a scene switching time in the embodiments of the present application can be summarized into three cases:
In the first manner, the dome camera detects that the current time is the time for switching to scene 2, where scene 2 is determined by point 2 and service 2, and the dome camera switches directly at the current time, that is, the dome camera adjusts itself to point 2 to collect a second image and performs the processing corresponding to service 2 on the second image.

This manner specifically includes case A and case B:

Case A: the dome camera is located at point 1 at the previous time but has not started working (that is, it neither collects images at point 1 nor processes images collected at point 1); when the dome camera detects that the current time is the time for switching to scene 2, it switches directly at the current time.

Case B: the dome camera is located at point 1 at the previous time, collecting a first image and processing it with service 1; when the dome camera detects that the current time is the time for switching to scene 2, it switches directly at the current time.
In the second manner, the dome camera is located at point 1 at the previous time, collecting a first image and processing it with service 1. The dome camera detects that the current time is the time for switching to scene 2, where scene 2 is determined by point 2 and service 2, but the first image is still being collected or has not yet been fully processed at the current time. In this case, the dome camera performs the switching after the first image has been processed, that is, the dome camera adjusts itself to point 2 to collect a second image and performs the processing corresponding to service 2 on the second image.
In the third manner, the dome camera collects a first image at point 1 at the previous time and processes it with service 1. The dome camera detects that the current time is the time for switching to scene 2, but a target body or a target event is detected in the first image collected at the previous time. In this case, the dome camera continues to collect the first image at point 1 and performs the processing of service 1 on it, and performs the switching only when the target body and/or the target event is no longer detected in the newly collected first image, that is, the dome camera then adjusts itself to point 2 to collect a second image and performs the processing corresponding to service 2 on the second image.
It should be noted that case A of the first manner corresponds to the example in S202-S206 in which there is no corresponding scene at the previous time; case B of the first manner corresponds to the examples in S202-S206 in which the priority of the first scene corresponding to the previous time is set to non-detection priority, or in which the priority is set to detection priority but no target body or target event is detected in the first image; the second manner corresponds to the example in S202-S206 in which the dome camera waits for the processing of the first service to be completed before performing the scene switching; and the third manner corresponds to the example in S202-S206 in which the priority of the first scene corresponding to the previous time is set to detection priority and the detection of a target body or target event in the first image delays the scene switching. The determination process of the three scene switching manners is described below through S202-S206:
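The three manners can be condensed into a small decision rule. The flags below are hypothetical simplifications of the state the dome camera would track, not the patented control flow:

```python
def decide_switch(prev_scene, processing_in_progress, detection_priority, target_detected):
    """Sketch of the three scene switching manners described above.

    prev_scene: the scene at the previous time, or None if no corresponding scene.
    processing_in_progress: the first image is being collected or still processed.
    detection_priority: the priority of the previous scene is detection priority.
    target_detected: a target body/event was found in the first image.
    """
    if prev_scene is None:
        return "switch now"                        # first manner, case A
    if detection_priority and target_detected:
        return "keep monitoring, switch later"     # third manner
    if processing_in_progress:
        return "switch after first image is done"  # second manner
    return "switch now"                            # first manner, case B

print(decide_switch(None, False, False, False))
print(decide_switch("scene 1", True, False, False))
print(decide_switch("scene 1", False, True, True))
```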
S202, determine whether a corresponding scene exists at the previous time.
In the embodiments of the present application, when detecting that the current time is a scene switching time and that the current time corresponds to the second scene, the dome camera further needs to determine whether a corresponding scene exists at the previous time. When it is determined that no corresponding scene exists at the previous time, which indicates that the dome camera is located at the first point location at the previous time but has not started working, S203 is executed; when it is determined that a corresponding scene exists at the previous time, S204 is executed.
Specifically, the dome camera judges whether the previous time falls within any cruise time period in the cruise time information. If the previous time does not fall within any cruise time period in the cruise time information, it is determined that no corresponding scene exists at the previous time; if the previous time falls within a certain cruise time period in the cruise time information, it is determined that a corresponding scene exists at the previous time.
For example, the cruise time information shown in Table 2 is used to describe the process of determining whether a corresponding scene exists at the previous time. In the cruise time period 9:00AM-10:00AM, the scene switching times are 9:00AM, 9:10AM, 9:30AM, and 9:40AM. If the current time is 9:00AM and the previous time is assumed to be 8:59AM, it is easy to see that the previous time does not fall within the only cruise time period 9:00AM-10:00AM shown in Table 2, so no corresponding scene exists at the previous time. If the current time is 9:10AM and the previous time is assumed to be 9:09AM, then since 9:09AM falls within the cruise time period 9:00AM-10:00AM, it is determined that a corresponding scene exists at the previous time, and the previous time corresponds to scene 1.
The above steps are described with reference to the modules of the camera in fig. 1A. When the cruise module detects, based on the cruise time information, that the current time is a scene switching time and that the current time corresponds to the second scene, the cruise module sends a scene switching request to the scene management module to obtain the second point location and the identifier of the second service corresponding to the second scene, where the scene switching request includes the identifier of the second scene. After receiving the scene switching request from the cruise module, the scene management module judges, in combination with the cruise time information, whether a corresponding scene exists at the previous time. When it is determined that no corresponding scene exists at the previous time, the scene management module sends the second point location to the cruise module and sends a first instruction to the service module, where the first instruction includes the identifier of the second service and is used to instruct the service module to start the second service according to the identifier of the second service. It should be noted that the second point location and the identifier of the second service are found by the scene management module in the scene parameter information according to the identifier of the second scene. When it is determined that a corresponding scene, for example a first scene, exists at the previous time, the processing of the scene management module may refer to the following description.
S203, starting a second service and switching from the first point to the second point to acquire a second image.
In the embodiments of the present application, when it is determined that no corresponding scene exists at the previous time, that is, the dome camera is located at the first point location at the previous time but has not started working, the dome camera determines to perform the scene switching operation at the current time, that is, to start the second service and switch from the first point location to the second point location to collect the second image. It can be understood that this example corresponds to case A of the first manner described above.
It should be noted that the second point location and the identifier of the second service may be found by the dome camera from the scene parameter information according to the identifier of the second scene, and the scene parameter information may be stored in the dome camera in advance.
For example, taking Tables 2 and 3 above as an example, if the current time is 9:00AM (a scene switching time), the current time corresponds to scene 1, and it is easy to see from the cruise time information shown in Table 2 that no corresponding scene exists at the previous time. In this case, the dome camera determines to perform the scene switching at the current time; according to Table 3, the dome camera knows that scene 1 corresponds to point 1 and service 1, so the dome camera starts service 1, adjusts its point location to point 1 to collect the first image, and performs the processing of service 1 on the first image collected at point 1 within the monitoring duration corresponding to scene 1, that is, 9:00AM-9:10AM.
It should be noted that, when the second service includes a plurality of services, after the second image is collected at the second point location, the plurality of services may be executed on the second image simultaneously, or each of the plurality of services may be executed on the second image in turn, which is not limited in the present application. For example, if 9:00AM-9:10AM corresponds to scene 1, and scene 1 is assumed to be bound to point 1, service 1, and service 2, then after a scene image is collected at point 1, the detection corresponding to service 1 and the detection corresponding to service 2 may be performed on the scene image simultaneously; or the detection corresponding to service 1 may be performed first and then the detection corresponding to service 2; or the detection of service 2 may be performed during 9:00AM-9:05AM and the detection of service 1 during 9:05AM-9:10AM.
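The simultaneous versus in-turn execution of a scene's services can be sketched as follows; the detector functions are placeholders for illustration, not real services of the dome camera:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder detectors standing in for service 1 and service 2.
def service_1(image):
    return ("service 1", image)

def service_2(image):
    return ("service 2", image)

services = [service_1, service_2]

def run_in_turn(image):
    """Execute each service on the image one after another."""
    return [svc(image) for svc in services]

def run_simultaneously(image):
    """Execute all services on the image concurrently; map() preserves order."""
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        return list(pool.map(lambda svc: svc(image), services))

print(run_in_turn("frame-001"))
print(run_simultaneously("frame-001"))
```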
These steps are described with reference to the modules of the camera in fig. 1A: the service module receives the first instruction sent by the scene management module, where the first instruction includes the identifier of the second service, and the cruise module receives the second point location sent by the scene management module; then the service module starts the second service according to the identifier of the second service in the first instruction, and the cruise module switches the dome camera from the first point location of the previous time to the second point location.
It should be noted that the time delay of switching the dome camera between point locations is ignored in the embodiments of the present application.
S204, when the previous time corresponds to the first scene, close the first service, start the second service, and switch from the first point location to the second point location to collect the second image.
In the embodiments of the present application, if the previous time corresponds to the first scene and the first scene is bound to the first point location and the first service, the first service is closed, the second service is started, and the dome camera switches from the first point location to the second point location to collect the second image.
It should be noted that the process in which the dome camera starts the second service and switches to the second point location may refer to the related description of S203, which is not repeated here for brevity.
It should be noted that the embodiments of the present application do not specifically limit the execution order of closing the first service, starting the second service, and switching from the first point location to the second point location; for example, the three operations may be executed one after another or simultaneously.
For example, taking Tables 2 and 3 above as an example, if the current time is 9:10AM (a scene switching time), the current time corresponds to scene 2 according to Table 2. Assuming that the previous time is, for example, 9:09AM, the previous time corresponds to scene 1, and it is easy to see from Table 3 that scene 1 is bound to point 1 and service 1. The dome camera determines to perform the scene switching at the current time; according to Tables 2 and 3, the dome camera knows that scene 2 corresponds to point 2 and service 2, so the dome camera may close service 1, then start service 2, and adjust itself from point 1 to point 2 to collect the scene image.
In some possible embodiments, if the previous time corresponds to the first scene, the first scene being bound to the first point location and the first service, and the dome camera detects that the current time is the time for switching to the second scene, the second scene corresponding to the second point location and the second service, but the first image is still being collected or the first service is still being executed on the first image at the current time, then the dome camera closes the first service after the first image has been processed, starts the second service, and switches from the first point location to the second point location to collect the second image. It can be understood that this example corresponds to the second manner described above.
These steps are described with reference to the modules of the camera in fig. 1A: the previous time corresponds to the first scene, and the scene management module determines, based on the scene parameter information, that the first scene is bound to the first service and the first point location; it therefore sends a second instruction to the service module, where the second instruction includes the identifier of the first service and is used to instruct the service module to close the first service. In addition, the scene management module sends the second point location to the cruise module and sends the first instruction to the service module, where the first instruction includes the identifier of the second service and is used to instruct the service module to start the second service. Correspondingly, after receiving the second instruction, the service module closes the first service according to the second instruction; after receiving the first instruction, it starts the second service according to the first instruction, and the cruise module switches the dome camera from the first point location to the second point location.
It should be noted that the time delay caused by starting or closing a service is ignored in the embodiments of the present application.
S205, when the priority corresponding to the first scene is detection priority, determine whether a target body or a target event is detected in the first image.
Optionally, in some possible embodiments, the scene parameter information further includes a priority corresponding to each scene, where the priority includes detection priority and non-detection priority, and the priority corresponding to each scene indicates the priority of the service corresponding to that scene. In this case, when determining that the previous time corresponds to the first scene, the dome camera further needs to determine whether the priority corresponding to the first scene is detection priority. When determining that the priority corresponding to the first scene is detection priority, since the first scene is bound to the first point location and the first service, the dome camera also needs to determine whether a target body or a target event is detected after executing the first service on the first image, where the first image is collected by the dome camera at the first point location at the previous time. It should be noted that, if the first service is used to detect a target body, it is only necessary to determine whether the target body is detected in the first image; if the first service is used to detect a target event, it is only necessary to determine whether the target event is detected in the first image; and if the first service is used to detect both a target body and a target event, it is necessary to determine whether a target body or a target event is detected in the first image. In summary, if the dome camera detects neither a target body nor a target event in the first image, the above S204 is executed; if the dome camera detects a target body or a target event in the first image, S206 is executed.
Taking the case in which no target object is detected in the first image as an example, the dome camera failing to detect a target object in the first image includes the following cases: (1) the first image includes only a single frame, the first service is executed on that frame for detection processing, and no target object is detected; (2) the first image includes multiple frames, the first service is executed on each frame, and no target object is detected in the last frame; (3) the first image includes multiple frames, the first service is executed on each frame, and no target object is detected in any of the frames. The cases in which no target event, or neither a target object nor a target event, is detected in the first image are similar to the foregoing description.
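The multi-frame decision above can be sketched as follows (a minimal illustrative sketch, not the application's actual implementation; `frames_detected` is a hypothetical list of per-frame detection results, with True meaning the target object was detected in that frame):

```python
def target_absent(frames_detected, check_every_frame=False):
    """Decide whether 'no target object is detected in the first image'.

    check_every_frame=False covers cases (1) and (2): only the last
    (or only) frame decides. check_every_frame=True covers case (3):
    the target must be absent from every frame.
    """
    if not frames_detected:
        return True  # nothing acquired, so nothing detected
    if check_every_frame:
        return not any(frames_detected)
    return not frames_detected[-1]
```

Under this sketch, a single frame with no detection, or a multi-frame image whose last frame has no detection, both count as "not detected", matching cases (1) and (2).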
The target object and the target event correspond to the service. For example, if the service is face detection, the target object is a face; if the service is vehicle detection, the target object is a vehicle; if the service is motor-vehicle/non-motor-vehicle/pedestrian detection, the target objects are motor vehicles, non-motor vehicles, and pedestrians; if the service is illegal-parking detection, the target event is illegal parking of a vehicle; and if the service is vehicle collision detection, the target event is a vehicle collision. If the service includes face detection and vehicle detection, the target objects include a face and a vehicle. In some possible embodiments, if the services are illegal-parking detection and face detection, the target object is a face and the target event is illegal parking of a vehicle. For the descriptions of the target object and the target event, refer to the related descriptions in S101; details are not repeated here.
In some possible embodiments, when the dome camera detects that the current time is the time for switching to the second scene, where the second scene is bound to the second point location and the second service, if the last time corresponds to the first scene, the first scene is bound to the first point location and the first service, and the priority of the first service is non-detection priority, the foregoing S204 is executed: the first service is closed, the second service is started, and the dome camera switches from the first point location to the second point location to acquire the second image. It may be understood that this example corresponds to case B in the first mode described above.
It should be noted that, when a scene is bound to multiple services, the priority settings of the multiple services may be the same, in which case the priority corresponding to the scene is either detection priority or non-detection priority. In some possible embodiments, when a scene is bound to multiple services, the priority settings of the multiple services may also differ, in which case the priorities corresponding to the scene include both detection priority and non-detection priority; here, determining that the priority corresponding to the first scene is detection priority means that a detection priority exists among the priorities corresponding to the first scene.
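This aggregation rule can be sketched as follows (an illustrative model; the string values are assumptions, not identifiers from the application):

```python
def scene_priority(service_priorities):
    # A scene bound to several services counts as "detection priority"
    # as soon as any one of its services is set to detection priority.
    return ("detection" if "detection" in service_priorities
            else "non-detection")
```

For example, a scene bound to one detection-priority service and one non-detection-priority service is treated as detection priority when deciding whether to delay switching.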
S206, delay closing the first service and delay switching to the second scene.
Optionally, in this embodiment of the application, when the dome camera determines that the last time corresponds to the first scene and the priority corresponding to the first scene is detection priority, where the first scene is bound to the first point location and the first service, if a target object or a target event is detected in the first image acquired at the first point location, the dome camera delays closing the first service and delays switching to the second scene. Here, delaying switching to the second scene means delaying starting the second service and delaying switching from the first point location to the second point location to acquire the second image. In one implementation, the closing of the first service may be delayed until no target object and/or target event is detected in the first image acquired again by the dome camera at the first point location. It may be understood that this example corresponds to the third mode described above.
It should be noted that, because the first service is closed with a delay, the time at which the dome camera switches to the second scene is also delayed. If the current time reaches the stop time of a cruise time period and the current time has no corresponding scene, the running service of the dome camera is closed, and the dome camera stops working until the start time of the next cruise time period.
For example, referring to Table 4, Table 4 adds "priority" setting information to Table 3. It can be seen from Table 4 that the priority of service 1 is "detection priority" and the priority of service 2 is "non-detection priority". S205 and S206 are further described with reference to Table 2 and Table 4. Assume that the current time 9:10AM is the time for switching to scene 2 (a scene switching time), the last time (for example, 9:09AM) is detected to correspond to scene 1, the priority corresponding to scene 1 is obtained from Table 4 as detection priority, and service 1 is face detection, that is, the target object is a face:
In one specific implementation, the dome camera checks whether a face is detected in the first image acquired at point location 1 at the time closest to the current time 9:10AM. If a face is detected in the first image, the dome camera does not close service 1 for the moment; it continues to acquire the first image at point location 1 and performs face detection on it. Suppose that, from 9:10AM onward, the dome camera remains at point location 1 acquiring the first image and performing face detection, and each detection result is that a face is detected, until no face is detected in the first image acquired at point location 1 at 9:12AM. The dome camera may then close service 1 at 9:12AM, start service 2, and adjust itself from point location 1 to point location 2; that is, the dome camera completes the switch from scene 1 to scene 2 with a delay, at 9:12AM;
In another specific implementation, the dome camera checks whether a face is detected in the first image acquired at point location 1 at the time closest to the current time 9:10AM. If no face is detected in the first image, the dome camera can perform the scene switching operation at 9:10AM: service 1 is first closed, service 2 is then started, and the dome camera adjusts itself from point location 1 to point location 2 to acquire the second image.
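The delayed-switch behavior of the first implementation can be sketched as follows (hypothetical callables standing in for the camera operations; this is not the actual device interface):

```python
def switch_with_delay(acquire, detect, close_service_1, start_service_2,
                      move_to_point_2):
    """Stay in scene 1 while the target is still detected; then switch.

    acquire(): grab the next first image at point location 1.
    detect(img): run service 1 on img; True if the target is detected.
    """
    while detect(acquire()):
        pass  # target still present: delay closing service 1
    # no target detected any more: complete the deferred switch
    close_service_1()
    start_service_2()
    move_to_point_2()
```

In the 9:10AM example, the loop runs until the 9:12AM image contains no face, at which point service 1 is closed, service 2 is started, and the camera moves to point location 2.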
For another example, described with reference to Table 2 and Table 4: it can be seen from Table 2 that the cruise time period is 9:00AM-10:00AM, and the sum of the monitoring duration corresponding to scene 1 (10min) and the monitoring duration corresponding to scene 2 (20min) is less than the duration (1h) of the cruise time period 9:00AM-10:00AM. As shown in (1) of fig. 6, which shows the scene switching times within the cruise time period when no delay occurs, the dome camera can cruise alternately in the order scene 1 - scene 2 - scene 1 - scene 2 within 9:00AM-10:00AM, with corresponding switching times of 9:00AM, 9:10AM, 9:30AM, and 9:40AM. When the switching of a certain scene is delayed, the subsequent scene switching mainly has the following two implementations:
In one implementation, referring to (2) of fig. 6, if the time of the first switch from scene 1 to scene 2 is delayed from 9:10AM to 9:12AM, the other scene switching times are delayed automatically; that is, the next scene switching time changes from 9:30AM to 9:32AM, and because the priority corresponding to scene 2 is non-detection priority, the dome camera switches back to scene 1 at 9:32AM. If the second switch from scene 1 to scene 2 is not delayed, it occurs at 9:42AM, and from 9:42AM the dome camera is located at point location 2. Although the monitoring duration corresponding to scene 2 is 20min, the stop time of the cruise time period 9:00AM-10:00AM is 10:00AM and 10:00AM has no corresponding scene, so service 2 is closed at 10:00AM and the dome camera stops working;
In another implementation, referring to (3) of fig. 6, if the time of the first switch from scene 1 to scene 2 is delayed from 9:10AM to 9:12AM, the other scene switching times are not delayed; that is, the dome camera switches to scene 2 at 9:12AM and stays in scene 2 until 9:30AM, and the first switch from scene 2 back to scene 1 still occurs at 9:30AM. Because the priority corresponding to scene 1 is detection priority, assume that no delay occurs in the second switch from scene 1 to scene 2, so the dome camera switches to scene 2 at 9:40AM. Because 10:00AM is the stop time of the cruise time period 9:00AM-10:00AM and 10:00AM has no corresponding scene, service 2 is closed at 10:00AM and the dome camera stops working.
TABLE 4

Scene identification    Point location      Service     Priority
Scene 1                 Point location 1    Service 1   Detection priority
Scene 2                 Point location 2    Service 2   Non-detection priority
S205 and S206 are described with reference to the modules of the camera in fig. 1A. When the scene management module determines that the last time corresponds to the first scene, it determines whether the priority corresponding to the first scene is detection priority. If the priority corresponding to the first scene is non-detection priority, refer to the foregoing description of S204; details are not repeated here. If the priority corresponding to the first scene is detection priority, the scene management module further determines whether a target object or a target event is detected in the first image acquired at the first point location. If the dome camera detects no target object or target event in the first image, refer to the foregoing description of S204; details are not repeated here. If a target object or a target event is detected in the first image, the scene management module delays sending a second instruction to the service module, where the second instruction instructs the service module to close the first service; delays sending a first instruction to the service module, where the first instruction instructs the service module to start the second service; and delays sending the second point location to the cruise module.
In some possible embodiments, after the dome camera executes, on the scene image acquired at the point location corresponding to each scene, the service corresponding to that scene and obtains a detection result, it may send the detection result to a server for display.
The following describes embodiments of the present application from another perspective (static):
After the configuration method described in the embodiment of fig. 2 is implemented, time-division multiplexing of the dome camera can be achieved; that is, the dome camera is located at different point locations at different times to execute different services. In addition, the dome camera can be located at the same point location at different times to execute different services. Specifically, the dome camera acquires a first image at a first time and executes a first service on the first image; the dome camera acquires a second image at a second time and executes a second service on the second image, where the first time is different from the second time and the first service is different from the second service. It should be noted that the dome camera is located at the same geographical location at the first time and the second time.
Description of the first service and the second service: both are algorithms or programs for detecting a target object or a target event in the monitored environment. The first service includes at least one of the following services: face detection, face recognition, vehicle detection, illegal-parking detection, overspeed detection, intrusion detection, motor-vehicle/non-motor-vehicle/pedestrian detection, red-light-running detection, detection of irregular driving behavior (the driver's hands leaving the steering wheel, the driver not wearing a seat belt in the cockpit, and the like), detection of traffic accidents (a vehicle colliding with another vehicle, a vehicle colliding with a public facility, and the like), detection of a pedestrian falling ill, detection of vehicle theft, and the like. Similarly, the second service also includes at least one of the above services. It should be noted that the first service is different from the second service; for example, the first service may be face detection, and the second service may include face detection and vehicle detection, or the second service may be violation detection.
In some possible embodiments, before acquiring the second image at the second time, the dome camera needs to adjust its first shooting field of view at the first time to the second shooting field of view at the second time; in other words, the dome camera may adjust its shooting field of view before acquiring the second image at the second time.
The shooting field of view may also be referred to as a point location. The shooting field of view of the dome camera can be adjusted by adjusting one or more of the following parameters of the pan-tilt of the dome camera: Pan, Tilt, and Zoom. For these three parameters, refer to the foregoing related descriptions; details are not repeated here.
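As an illustration only (the field names are assumptions, not the device's actual interface), adjusting the shooting field of view amounts to updating one or more of the three pan-tilt parameters:

```python
def adjust_view(ptz, pan=None, tilt=None, zoom=None):
    """Update any subset of the Pan/Tilt/Zoom parameters of the pan-tilt.

    ptz: dict with keys "pan", "tilt", "zoom". Unspecified parameters
    keep their current values, so a point location is fully described
    by the resulting triple.
    """
    updated = dict(ptz)
    for key, value in (("pan", pan), ("tilt", tilt), ("zoom", zoom)):
        if value is not None:
            updated[key] = value
    return updated
```

For example, switching between two point locations at the same zoom level may require changing only the pan parameter.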
In some possible embodiments, before the dome camera acquires the first image at the first time, the dome camera receives in advance a binding relationship between a first time period and the first service and a binding relationship between a second time period and the second service, where the first time belongs to the first time period and the second time belongs to the second time period.
The first time is any time within the first time period (including the start time and the end time of the first time period), and the second time is any time within the second time period (including the start time and the end time of the second time period). The first time period and the second time period may be two adjacent time periods or two separated time periods; this application is not particularly limited in this regard.
For example, assume that when the configuration method described in the foregoing embodiment of fig. 2 is implemented, 8:00AM-8:20AM is bound to the first shooting field of view and the first service, and 8:30AM-8:40AM is bound to the second shooting field of view and the second service. It may be understood that 8:00AM-8:20AM is the first time period and 8:30AM-8:40AM is the second time period; in this case, the first time period and the second time period are two separated time periods. For another example, assume that after the configuration of fig. 3, 8:00AM-8:10AM is bound to the first shooting field of view and the first service, and 8:10AM-8:30AM is bound to the second shooting field of view and the second service. It may be understood that 8:00AM-8:10AM is the first time period and 8:10AM-8:30AM is the second time period; in this case, the first time period and the second time period are two adjacent time periods. In some possible embodiments, the first time period and the second time period may also be two adjacent time periods within the same cruise time period.
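Resolving which shooting field of view and service apply at a given time from such bindings can be sketched as follows (times in minutes since midnight; the data layout is an illustrative assumption):

```python
def current_binding(bindings, t):
    """Look up the binding whose time period contains t.

    bindings: list of (start, end, view, service); both the start time
    and the end time of each period are included, per the description
    above. Returns (view, service), or None when t has no corresponding
    scene (the camera is idle then).
    """
    for start, end, view, service in bindings:
        if start <= t <= end:
            return view, service
    return None
```

With the separated-periods example (8:00AM-8:20AM and 8:30AM-8:40AM), a query at 8:25AM falls in neither period and returns None.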
It should be noted that, in terms of the concept of a scene in S201-S206, because a scene is determined jointly by a point location (or shooting field of view) and a service, the first shooting field of view and the first service define one scene, and the second shooting field of view and the second service define another scene. It may thus be understood that, because the first time period corresponds to the first shooting field of view and the first service, and the second time period corresponds to the second shooting field of view and the second service, the scene corresponding to the first time period is different from the scene corresponding to the second time period.
In some possible embodiments, the dome camera may also execute different services at the same point location at different times. Specifically, the dome camera acquires a first image with the first shooting field of view at a first time and executes a first service on the first image; the dome camera acquires a third image with the first shooting field of view at a second time and executes a second service on the third image, where the first time is different from the second time and the first service is different from the second service. It can be seen that the first image and the third image correspond to the same shooting field of view but to different services; because a scene is determined by the shooting field of view and the service, the scene corresponding to the first time period is different from the scene corresponding to the second time period. In this way, the dome camera executes different services at the same point location in a time-division manner, which improves the utilization of the dome camera and reduces the waste of hardware resources in complex environments.
In some possible embodiments, the adjustment of the dome camera from the first shooting field of view at the first time to the second shooting field of view at the second time may be triggered unconditionally. For example, when the dome camera detects that the current time is a scene switching time, it enters the step of adjusting its shooting field of view, that is, adjusts at least one of the three pan-tilt parameters. Specifically, refer to the descriptions of S203 and S204 in the embodiment of fig. 5, which is equivalent to switching the dome camera from the first point location to the second point location; for brevity, details are not repeated here.
In some possible embodiments, a trigger condition may also be set for the dome camera to adjust its shooting field of view. If, before acquiring the second image at the second time, the dome camera receives an instruction to adjust the field of view while it is still acquiring the first image with the first shooting field of view, the dome camera finishes acquiring the first image and then enters the step of adjusting its shooting field of view.
For example, taking the modules of the camera in fig. 1A as an example, the scene management module of the dome camera receives the field-of-view adjustment instruction sent by the cruise module at the scene switching time, and the scene management module detects that the first image is still being acquired with the first shooting field of view or that service processing is still being executed on the first image acquired with the first shooting field of view. In this case, once the first image has been acquired or processed, the dome camera enters the step of adjusting the shooting field of view, so as to adjust the first shooting field of view at the first time to the second shooting field of view at the second time; that is, the scene management module sends the parameter information related to the second shooting field of view to the cruise module.
For another example, the scene management module of the dome camera receives the field-of-view adjustment instruction sent by the cruise module at the scene switching time, and detects that the priority of the first service is detection priority and that a target object or a target event is detected in the first image. In this case, the dome camera continues to acquire the first image with the first shooting field of view and executes the first service on it until no target object or target event is detected in the acquired first image; the acquisition of the first image is then complete, and the dome camera enters the step of adjusting its shooting field of view, that is, adjusts the first shooting field of view at the first time to the second shooting field of view at the second time.
In some possible embodiments, the identifier of the second service may be obtained by the dome camera from a server. Specifically, before the dome camera acquires the second image at the second time and executes the second service on it, the dome camera may send a switching request to the server, where the switching request carries indication information indicating the second time period. Correspondingly, the server receives the switching request and, according to the indication information in it, searches pre-stored configuration information for the identifier of the second service and the second shooting field of view corresponding to the second time period. The server then sends a switching request response to the dome camera, where the switching request response includes the identifier of the second service, or includes both the identifier of the second service and the second shooting field of view, so that the dome camera adjusts itself according to the second shooting field of view in the switching request response.
In some possible embodiments, the configuration information of the dome camera is stored in the dome camera in advance, and the dome camera may directly obtain the identifier of the second service from its own memory. Specifically, a first mapping relationship and a second mapping relationship are stored in the dome camera, where the first mapping relationship is a mapping between the first time period and the identifier of the first service, and the second mapping relationship is a mapping between the second time period and the identifier of the second service. In addition, the first mapping relationship further includes a mapping between the first time period and the first shooting field of view, and the second mapping relationship further includes a mapping between the second time period and the second shooting field of view. When the dome camera determines that the shooting field of view needs to be adjusted, it searches, according to the second time period, for the corresponding second shooting field of view and the identifier of the second service, then adjusts its shooting field of view to the second shooting field of view and starts the second service according to the identifier of the second service.
It can be seen that, by implementing the embodiments of this application, the dome camera switches between different scenes according to the pre-stored configuration information and executes, on the scene image acquired at the point location corresponding to each scene, the service processing corresponding to that scene. In other words, the dome camera executes different services at different point locations at different times, or executes different services at the same point location at different times. This improves the utilization efficiency of the dome camera, meets different service requirements in different scenes, or multiple service requirements in the same scene, with as few dome cameras as possible, and effectively reduces dome camera deployment costs.
Referring to fig. 7, fig. 7 shows another image capturing method provided in the embodiments of this application. Unlike the embodiment of fig. 5, in the image capturing method shown in fig. 7 the identifier of the second service and the second shooting field of view are obtained by the dome camera from the server, so the embodiment of fig. 7 requires the dome camera to interact with the server, whereas in the embodiment of fig. 5 the identifier of the second service and the second shooting field of view are stored inside the dome camera, so the embodiment of fig. 5 can be executed by the dome camera alone. The embodiment of fig. 7 may be independent of the embodiment of fig. 5, or may supplement it. The method includes, but is not limited to, the following steps:
S301, the dome camera obtains cruise time information.
In this embodiment of the application, after the dome camera is started, it obtains the cruise time information, where the cruise time information includes the identifiers of the multiple scenes to be cruised by the dome camera and the cruise time period corresponding to each scene. In some possible embodiments, when multiple scenes correspond to one cruise time period, the cruise time information further includes the monitoring duration corresponding to each scene and the cruise order corresponding to each scene. The cruise time information may be received by the dome camera from the server, or the dome camera may retrieve pre-stored cruise time information from its own memory.
S302, the dome camera detects that the current time is the time for switching to the second scene, and determines whether the scene switching condition is satisfied at the current time.
In this embodiment of the application, the dome camera is located at the first point location at the last time before the current time, where the last time is any time within a preset time interval before the current time; for the description of the last time, refer to the related description in S201, which is not repeated here. The dome camera detects, based on the cruise time information, that the current time is the time for switching to the second scene. In this case, the dome camera needs to determine whether the current time satisfies the scene switching condition: if the current time satisfies the scene switching condition, S303 is executed; if not, S307 is executed.
The scene switching condition may be any one of the following conditions:
Condition A: the dome camera is located at the first point location at the last time, but has not started working (that is, the last time has no corresponding scene).
That is, the last time does not belong to any cruise time period in the cruise time information, and therefore has no corresponding scene; in other words, the dome camera is not working at the last time.
Condition B: the dome camera acquires the first image at the first point location at the last time and executes the first service on the first image, where the priority corresponding to the first service is non-detection priority.
That is, the last time belongs to a certain cruise time period in the cruise time information and corresponds to the first scene, and the first scene is bound to the first point location and the first service.
Condition C: the dome camera acquires the first image at the first point location at the last time and executes the first service on the first image, where the priority corresponding to the first service is detection priority, but no target object or target event is detected in the first image.
That is, the last time has a corresponding scene (the first scene) and the priority of the scene is set to detection priority, but the first service executed on the first image acquired at the first point location detects no target object or target event. For the descriptions of the service, the target object, and the target event, refer to the related descriptions in S101 and S205; details are not repeated here.
The dome camera determines whether any one of the above three conditions (condition A, condition B, or condition C) is satisfied; if any one is satisfied, S303 is executed; if none is satisfied, S307 is executed.
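The three-way check can be sketched as follows (an illustrative model of conditions A-C; the argument names and string values are assumptions):

```python
def switching_allowed(last_scene, priority, target_detected):
    """True if any of conditions A, B, C holds at the current time.

    last_scene: None when the last time had no corresponding scene.
    priority: priority of the first service ("detection"/"non-detection").
    target_detected: whether the first service detected a target object
    or target event in the first image acquired at the first point.
    """
    if last_scene is None:            # condition A: camera was idle
        return True
    if priority == "non-detection":   # condition B
        return True
    return not target_detected        # condition C (detection priority)
```

Only the remaining case, detection priority with a target still detected, falls through to S307 and defers the switch.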
S303, the dome camera sends a scene switching request to the server.
In this embodiment of the application, when determining that it satisfies the scene switching condition, the dome camera sends a scene switching request to the server to obtain the second point location corresponding to the second scene and the identifier of the second service. The scene switching request carries the identifier of the second scene. Correspondingly, the server receives the scene switching request sent by the dome camera.
In some possible embodiments, the scene switching request is further used to obtain the priority corresponding to the second scene from the server; for the description of the priority, refer to the related description in the embodiment of fig. 5, which is not repeated here.
S304, the server determines a second point location corresponding to the second scene and an identifier of the second service according to the identifier of the second scene.
In this embodiment of the application, after receiving the scene switching request sent by the dome camera, the server searches the scene parameter information, according to the identifier of the second scene in the scene switching request, for the second point location corresponding to the second scene and the identifier of the second service.
The scene parameter information is pre-stored in the server and includes the point location corresponding to each of the multiple scenes to be cruised and the service bound to each scene. In some possible embodiments, the scene parameter information may further include the priority corresponding to each scene. It should be noted that the scene parameter information and the cruise time information may be included in one piece of information, for example, cruise configuration information, or may be two separate pieces of information; this application is not particularly limited in this regard.
It should be noted that the cruise time information and the scene parameter information may be automatically stored in the server after the user completes the cruise task configuration for the dome camera on a user interface provided by the server in advance.
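On the server side, answering a scene switching request reduces to a lookup in the pre-stored scene parameter information; a minimal sketch (the data layout and key names are assumptions, not the application's actual message format):

```python
def handle_switch_request(scene_params, scene_id):
    """Resolve a scene identifier to the first-information payload.

    scene_params: pre-stored mapping from scene identifier to its bound
    point location, service identifier, and (optionally) priority.
    """
    entry = scene_params[scene_id]
    first_info = {"point": entry["point"],
                  "service_id": entry["service_id"]}
    if "priority" in entry:  # optionally return the scene's priority too
        first_info["priority"] = entry["priority"]
    return first_info
```

The returned payload corresponds to the first information of S305: the second point location, the identifier of the second service, and optionally the priority corresponding to the second scene.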
S305, the server sends first information to the dome camera.
In this embodiment of the application, after obtaining the second point location and the identifier of the second service, the server sends first information to the dome camera, where the first information includes the second point location and the identifier of the second service. Correspondingly, the dome camera receives the first information sent by the server.
In some possible embodiments, the first information sent by the server to the dome camera further includes the priority corresponding to the second scene. It should be noted that the priority corresponding to the second scene may be included in the same piece of information as the second point location and the identifier of the second service, for example, the first information, or may belong to a separate message; this application is not particularly limited in this regard.
S306, the dome camera executes the scene switching operation.
In this embodiment of the application, when the dome camera determines that it satisfies the scene switching condition at the current time, after receiving the first information sent by the server, the dome camera may execute the scene switching at the current time according to the second point location and the identifier of the second service in the first information.
In one specific implementation, if the dome camera satisfies condition A in S302, performing the scene switching at the current time means: starting the second service according to the identifier of the second service, and switching the dome camera from the first point location to the second point location to collect the second image.
In another specific implementation, if the dome camera satisfies condition B or condition C in S302, performing the scene switching at the current time means: closing the first service, starting the second service according to the identifier of the second service, and switching the dome camera from the first point location to the second point location to collect the second image.
S307, the dome camera delays executing step S303 until no target object or target event is detected in the first image collected at the first point location at a target time.
In this embodiment of the application, if the dome camera determines that it does not satisfy the scene switching condition at the current time, that is, the dome camera satisfies none of the three conditions recited in S302 at the current time, this indicates that the dome camera detected a target object or a target event after executing the first service on the first image collected at the first point location at the current time. The dome camera therefore continues to collect the first image at the first point location and detects whether the target object or target event is still present in the first image, delaying the sending of the scene switching request to the server until no target object or target event is detected in the first image collected at the first point location at a target time, where the target time is later than the current time (that is, a certain scene switching time). In other words, if the dome camera no longer detects the target object or target event in the first image collected at the first point location at the target time, the dome camera sends the scene switching request to the server at the target time and accordingly performs the scene switching at the target time: the first service is closed, the second service is started according to the identifier of the second service, and the dome camera switches from the first point location to the second point location to collect the second image.
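The delayed switching in S307 amounts to a polling loop: keep collecting at the first point location until the detector no longer reports a target, then send the scene switching request. A sketch under the assumption of three hypothetical callables:

```python
def delayed_scene_switch(collect_first_image, detect, request_switch,
                         max_polls=1000):
    """Delay the scene switching request of S307 until no target remains.

    collect_first_image() returns the latest first image collected at the
    first point location; detect(image) returns True while a target object
    or target event is still present; request_switch() sends the scene
    switching request to the server. All three callables are illustrative
    assumptions, and the loop is bounded only to keep the sketch finite.
    """
    for _ in range(max_polls):
        image = collect_first_image()   # keep collecting the first image
        if not detect(image):           # target object/event no longer present
            request_switch()            # switch at this (target) time
            return True
    return False                        # target still present after max_polls
```

The return value distinguishes a completed switch from a scene that stayed busy for the whole (bounded) observation window.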
It should be noted that the time delay between the dome camera sending the scene switching request to the server and the dome camera receiving the first information sent by the server is negligible.
In some possible embodiments, after the dome camera executes the service corresponding to each scene on the scene image collected at the point location corresponding to that scene and obtains a detection result, the detection result may be sent to the server for display.
The method described in the embodiment of FIG. 7 is described below in conjunction with the modules in the system architecture shown in FIG. 1B. The cruise module in the dome camera obtains the cruise time information from the server, detects based on the cruise time information that the current time is the time for switching to the second scene, and determines whether the scene switching condition is satisfied at the current time, that is, whether any one of the conditions described in S302 is met; for details, refer to the relevant description in S302, which is not repeated here. When the cruise module determines that the scene switching condition is satisfied at the current time, the cruise module sends a scene switching request to the server, where the scene switching request includes the identifier of the second scene. Accordingly, after the scene management module of the server receives the scene switching request, the scene management module responds to it by searching the scene parameter information, according to the identifier of the second scene, for the second point location corresponding to the second scene, the identifier of the second service, and the priority of the second service, and then sends the second point location, the identifier of the second service, and the priority of the second service to the dome camera. After receiving the identifier of the second service sent by the server, the service management module of the dome camera closes the first service (when a service corresponding to the first scene is open) and opens the second service, and the cruise module adjusts the dome camera from the first point location to the second point location.
If the cruise module determines that the scene switching condition is not satisfied at the current time, it delays sending the scene switching request to the server until it determines that no target object or target event is detected in the first image collected at the first point location at the target time.
In the method shown in fig. 7, both the determination of whether the scene switching condition is satisfied and the processing of the service occur inside the dome camera; when the dome camera determines that the scene switching condition is satisfied, it sends a scene switching request to the server to obtain the point location corresponding to the target scene, the identifier of the service corresponding to the target scene, and the priority corresponding to the target scene. In some possible embodiments, the determination of whether the scene switching condition is satisfied may instead be performed by the server. Specifically, the dome camera obtains the cruise time information from the server and detects, based on the cruise time information, that the current time is the time for switching to the second scene, the dome camera having been located at the first point location until then. The dome camera sends a first request to the server, where the first request includes the identifier of the second scene. After receiving the first request, the server determines whether the dome camera satisfies the scene switching condition at the current time, that is, whether any one of the conditions in S302 is met; for details, refer to the relevant description in S302, which is not repeated here.
It should be noted that, when the dome camera is located at each point location, it sends the scene image collected at that point location and the detection result of the scene image to the server, so the server can determine, based on the detection result, whether a target object or a target event is detected in the scene image. If the server determines that the dome camera satisfies the scene switching condition at the current time, the server responds to the first request by sending first information to the dome camera, where the first information includes the second point location corresponding to the second scene, the identifier of the second service, and the priority of the second service; the dome camera then performs the scene switching at the current time according to the received first information. For the specific process of the scene switching, refer to the relevant description of S306, which is not repeated here. If the server determines that the dome camera does not satisfy the scene switching condition at the current time, it delays responding to the first request until it determines that the dome camera detects no target object or target event in the first image collected at the first point location at the target time.
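In this alternative flow the server holds the camera's last reported detection result and either answers the first request with the first information or delays the response. A sketch, with illustrative field names that are assumptions rather than a format defined by this application:

```python
def server_handle_first_request(scene_param_info, latest_detection, scene_id):
    """Server-side check of the scene switching condition (alternative flow).

    latest_detection is the detection result the dome camera last reported
    for the scene image collected at the first point location; a truthy
    value means a target object or target event is still present, so the
    server delays its response (modeled here by returning None). Otherwise
    the server looks up the second scene and returns the first information.
    """
    if latest_detection:
        return None                       # condition not met: delay the response
    entry = scene_param_info[scene_id]    # look up the second scene by identifier
    return {                              # first information sent to the camera
        "point": entry["point"],
        "service_id": entry["service_id"],
        "priority": entry["priority"],
    }
```

A real server would requeue the request rather than return None, but the lookup-and-reply shape is the same.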
By implementing this embodiment of the application, the dome camera obtains from the server the identifier of each of the plurality of scenes to be cruised and the cruise time period corresponding to each scene, so as to switch between different scenes, where each scene is bound to one point location and at least one service. In this way, as few dome cameras as possible can meet different service requirements in different scenes, or multiple service requirements in the same scene, which improves the service efficiency of the dome camera and effectively reduces its deployment cost.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
As shown in fig. 8, the apparatus 30 at least includes a processor 301, a memory 302, a communication interface 303, an input/output interface 304, an input/output device 305, and a bus 300, wherein the memory 302, the input/output interface 304, and the input/output device 305 are respectively connected to the processor 301 through the bus 300. The device 30 may be the server in fig. 1A.
The bus 300 is used for transferring information between the components of the device 30; the bus 300 may be a wired or wireless connection, which is not limited in this application.
Processor 301 may be composed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof. For the specific implementation of the operations executed by the processor 301, refer to the specific operations of performing scene configuration and cruise configuration on each scene to be cruised in the foregoing method embodiments. The device 30 may also be the server in fig. 1B, and in some possible embodiments the processor 301 is further configured to determine whether the dome camera satisfies the scene switching condition at the current time.
Memory 302 may include Volatile Memory, such as Random Access Memory (RAM); the memory 302 may also include Non-Volatile Memory, such as Read-Only Memory (ROM), Flash Memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); the memory 302 may also include a combination of the foregoing types. The memory 302 may store programs and data, where the stored programs include the cruise configuration program and the like, and the stored data includes the cruise time information, the scene parameter information, scene images, and the like. The memory 302 may be separate or integrated within the processor 301. When the device 30 is the server in fig. 1B, the programs stored in the memory 302 further include a program for determining whether the scene switching condition is satisfied.
Communication interface 303 enables communication with camera 40 of fig. 9 using transceiver means such as, but not limited to, a transceiver, and communication interface 303 may be interconnected with camera 40 in a wired or wireless manner and may be used to transmit cruise time information, or cruise time information and scene parameter information, to camera 40. When the device 30 is the server in fig. 1B, the communication interface 303 is further configured to receive a scene switching request sent by the camera 40 and send first information to the camera 40, where the first information includes a second point location (i.e., a second shooting view) corresponding to a second scene, an identifier of a second service, and a priority of the second service.
The input/output interface 304 is connected to the input/output device 305 and is used for receiving input information and outputting an operation result. The input/output device may be a mouse, a keyboard, a display screen, or the like, where the display screen is used to display a configuration user interface of the dome camera, so that a user can complete the cruise configuration of the dome camera through the mouse, the keyboard, and so on. In some possible embodiments, the display screen may also be a touch display screen; this application is not specifically limited.
Moreover, fig. 8 is merely an example of one device 30, and device 30 may include more or fewer components than shown in fig. 8, or have a different arrangement of components. Also, the various components illustrated in FIG. 8 may be implemented in hardware, software, or a combination of hardware and software.
In the embodiment of the present application, the apparatus 30 is used to implement the method described in the embodiment of fig. 2 and the server-side method described in the embodiment of fig. 7.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a camera according to an embodiment of the present application.
As shown in fig. 9, the camera 40 at least includes a lens 400, a sensor 401, a processor 402 and a pan/tilt head 403, wherein the lens 400, the sensor 401 and the pan/tilt head 403 are respectively connected to the processor 402, and the pan/tilt head 403 is further connected to the lens. The camera 40 may be the camera of fig. 1A or 1B.
The lens 400 is used for collecting light and imaging an external scene on the sensor 401, and the lens 400 may be threaded and generally consists of a group of lenses and a diaphragm. The lens 400 may be a standard lens, a telephoto lens, a zoom lens, or a variable focus lens, and the like, and the material of the lens 400 may be glass or plastic, which is not particularly limited in this application.
The sensor 401 is configured to perform photoelectric conversion on light collected by the lens 400, specifically, perform photoelectric conversion on first light collected at a first time to generate a first image, and perform photoelectric conversion on second light collected at a second time to generate a second image. The sensor 401 may be an image sensor, such as a Charge Coupled Device (CCD) sensor or a Complementary Metal Oxide Semiconductor (CMOS) sensor.
The processor 402 is configured to execute a first service on the first image and a second service on the second image. Processor 402 may be composed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof.
The pan/tilt head 403 is used to adjust the shooting view of the camera 40, that is, a first shooting view corresponding to a first time is adjusted to a second shooting view corresponding to a second time.
Moreover, fig. 9 is merely an example of one camera 40, and camera 40 may include more or fewer components than shown in fig. 9, or have a different configuration of components. Also, the various components illustrated in FIG. 9 may be implemented in hardware, software, or a combination of hardware and software.
In the embodiment of the present application, the camera 40 is used to implement the method described in the embodiment of fig. 5 and the method described in the embodiment of fig. 7 on the ball machine side.
Referring to fig. 10, fig. 10 is a schematic diagram of the functional structure of an apparatus provided in an embodiment of the present application; the apparatus 31 includes an acquisition unit 310 and a processing unit 311. The apparatus 31 may be implemented by hardware, software, or a combination of hardware and software.
The acquisition unit 310 is configured to collect a first image at a first time, and the processing unit 311 is configured to execute a first service on the first image; the acquisition unit 310 is further configured to collect a second image at a second time, and the processing unit 311 is further configured to execute a second service on the second image, where the first time is different from the second time, and the first service is different from the second service.
The functional modules of the apparatus 31 may be used to implement the method described in the embodiment of fig. 5. In the embodiment of fig. 5, the acquisition unit 310 may be configured to perform S203 and S204, and the processing unit 311 may be configured to perform S201-S206. The functional modules of the apparatus 31 can also be used to implement the method described in the embodiment of fig. 7, and are not described herein again for brevity of the description.
Referring to fig. 11, fig. 11 is a schematic diagram of the functional structure of an apparatus provided in an embodiment of the present application; the apparatus 41 includes a configuration unit 410 and a sending unit 411. Optionally, in some possible embodiments, the apparatus 41 further includes a receiving unit 412. The apparatus 41 may be implemented by hardware, software, or a combination of hardware and software.
The configuration unit 410 is configured to set a first time period and bind the first time period with a first shooting view and a first service; the configuration unit 410 is further configured to set a second time period and bind the second time period with a second shooting view and a second service. The sending unit 411 is configured to send configuration information to the camera, where the configuration information includes a first mapping relationship and a second mapping relationship, the first mapping relationship being a mapping relationship between the first time period and the identifier of the first service, and the second mapping relationship being a mapping relationship between the second time period and the identifier of the second service. In some possible embodiments, the receiving unit 412 is configured to receive a cruise time period input through a user interface and divide the cruise time period into a plurality of time periods, where the plurality of time periods includes the first time period and the second time period, and further includes time periods in which the services corresponding to the first time period and the second time period are executed alternately according to the length of the first time period and the length of the second time period.
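The division of the cruise time period into alternating sub-periods can be sketched as follows; treating times as seconds and alternating strictly between the two configured lengths is one plausible reading of this configuration, not a rule the application prescribes:

```python
def divide_cruise_period(start, end, first_len, second_len):
    """Divide a cruise time period [start, end) into alternating sub-periods.

    The first sub-period has the length of the first time period, the next
    the length of the second time period, and the remainder of the cruise
    period keeps alternating between the two lengths, so that the services
    bound to the first and second time periods execute alternately. Times
    are plain numbers (e.g. seconds); the final sub-period is clamped to
    the end of the cruise period.
    """
    periods, t, use_first = [], start, True
    while t < end:
        length = first_len if use_first else second_len
        periods.append((t, min(t + length, end)))  # clamp the last period
        t += length
        use_first = not use_first
    return periods
```

For example, a 50-second cruise period with a 20-second first period and a 10-second second period yields three sub-periods, the last one clamped.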
The functional modules of the apparatus 41 may be used to implement the method described in the embodiment of fig. 2. In the fig. 2 embodiment, configuration unit 410 may be used to perform S101 and S102.
In the embodiments described above, the descriptions of the respective embodiments have respective emphasis, and reference may be made to related descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
It should be noted that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a device (which may be a personal computer, a server, a network device, a robot, a single-chip microcomputer, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.

Claims (22)

1. An image pickup method applied to a camera, the method comprising:
the camera collects a first image at a first moment and executes a first service on the first image;
the camera collects a second image at a second moment, and executes a second service on the second image, wherein the second moment is different from the first moment, and the second service is different from the first service.
2. The method of claim 1, wherein prior to the camera acquiring a second image at a second time, the method further comprises:
adjusting a first shooting view of the camera at the first time to a second shooting view at the second time.
3. The method of claim 2, wherein the adjusting the first camera view at the first time to the second camera view at the second time comprises:
adjusting the shooting visual field of the camera by adjusting one or more of the following parameters of the camera's pan-tilt:
pan, Tilt, and Zoom.
4. The method of claim 2, wherein prior to the camera acquiring the second image at the second time, the method further comprises:
receiving an instruction to adjust a field of view;
and continuing to acquire the first image in the first shooting view, and entering the step of adjusting the shooting view of the camera when the acquisition of the first image is finished.
5. The method according to any of claims 2-4, wherein the first service comprises at least one of:
face recognition, vehicle recognition, man-in-the-air detection, parking violation detection, overspeed detection, red light running detection, intrusion detection, pedestrian stun detection, vehicle collision detection, car theft detection, and fighting detection.
6. The method of any of claims 2-5, wherein prior to the camera acquiring a second image at a second time and performing a second transaction on the second image, the method further comprises:
sending a switching request to a server;
receiving a switching request response sent by the server, wherein the switching request response comprises an identifier of the second service; or, the switching request response includes an identifier of the second service and the second shooting view, and the identifier of the second service and the second shooting view correspond to the second time respectively.
7. The method of any of claims 1-5, wherein prior to the camera acquiring the first image at the first instance in time, the method further comprises:
receiving a binding relationship between a first time period and the first service, wherein the first time belongs to the first time period;
and receiving a binding relationship between a second time period and the second service, wherein the second time belongs to the second time period.
8. A camera configuration method is applied to a server, and comprises the following steps:
setting a first time period, and binding the first time period with a first shooting view and a first service;
setting a second time period, and binding the second time period with a second shooting view and a second service;
and sending configuration information to a camera, wherein the configuration information comprises a first mapping relation and a second mapping relation, the first mapping relation is a mapping relation between the first time period and the identifier of the first service, and the second mapping relation is a mapping relation between the second time period and the identifier of the second service.
9. The method of claim 8,
the first mapping relationship further includes: a mapping relationship between the first time period and the first photographing field of view;
the second mapping relationship further includes: a mapping relationship between the second time period and the second photographing field of view.
10. The method of claim 8, further comprising:
receiving a cruise time period input through a user interface, dividing the cruise time period into a plurality of time periods, the plurality of time periods including the first time period and the second time period, the plurality of time periods further including: and alternately executing the time periods of the services corresponding to the first time period and the second time period according to the length of the first time period and the length of the second time period.
11. An apparatus for imaging, the apparatus comprising:
an acquisition unit, configured to collect a first image at a first time, and a processing unit, configured to execute a first service on the first image;
the acquisition unit is further configured to collect a second image at a second time, and the processing unit is further configured to execute a second service on the second image, wherein the second time is different from the first time, and the second service is different from the first service.
12. The apparatus of claim 11, wherein the processing unit is further configured to:
adjust a first shooting view of the camera at the first time to a second shooting view at the second time.
13. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
adjusting the shooting visual field of the camera by adjusting one or more of the following parameters of the camera's pan-tilt:
pan, Tilt, and Zoom.
14. The apparatus of claim 12, further comprising:
a receiving unit for receiving an instruction to adjust a field of view;
the processing unit is configured to continue collecting the first image in the first shooting view, and to enter the step of adjusting the shooting view of the camera when the collection of the first image is finished.
15. The apparatus according to any of claims 12-14, wherein the first service comprises at least one of:
face recognition, vehicle recognition, man-in-the-air detection, parking violation detection, overspeed detection, red light running detection, intrusion detection, pedestrian stun detection, vehicle collision detection, car theft detection, and fighting detection.
16. The apparatus according to any one of claims 12-15, further comprising:
a sending unit, configured to send a handover request to a server;
the receiving unit is further configured to receive a handover request response sent by the server, where the handover request response includes an identifier of the second service; or, the switching request response includes an identifier of the second service and the second shooting view, and the identifier of the second service and the second shooting view correspond to the second time respectively.
17. The apparatus according to any of claims 11-15, wherein the receiving unit is further configured to:
receiving a binding relationship between a first time period and the first service, wherein the first time belongs to the first time period;
and receiving a binding relationship between a second time period and the second service, wherein the second time belongs to the second time period.
18. An apparatus for camera deployment, the apparatus comprising:
the configuration unit is used for setting a first time period and binding the first time period with a first shooting view and a first service;
the configuration unit is further configured to set a second time period, and bind the second time period with a second shooting view and a second service;
a sending unit, configured to send configuration information to a camera, where the configuration information includes a first mapping relationship and a second mapping relationship, where the first mapping relationship is a mapping relationship between the first time period and an identifier of the first service, and the second mapping relationship is a mapping relationship between the second time period and an identifier of the second service.
19. The apparatus of claim 18,
the first mapping relationship further includes: a mapping relationship between the first time period and the first photographing field of view;
the second mapping relationship further includes: a mapping relationship between the second time period and the second photographing field of view.
20. The apparatus of claim 18, wherein the receiving unit is further configured to:
receiving a cruise time period input through a user interface, dividing the cruise time period into a plurality of time periods, the plurality of time periods including the first time period and the second time period, the plurality of time periods further including: and alternately executing the time periods of the services corresponding to the first time period and the second time period according to the length of the first time period and the length of the second time period.
21. A camera, characterized in that the camera comprises:
a lens, configured to collect first light at a first time, and a sensor, configured to perform photoelectric conversion on the first light to generate a first image;
the lens is further used for collecting second light at a second moment, and the sensor is further used for performing photoelectric conversion on the second light to generate a second image, wherein the second moment is different from the first moment;
a processor, configured to execute a first service on the first image and a second service on the second image, wherein the first service is different from the second service.
22. The camera of claim 21, further comprising:
a pan/tilt head, configured to adjust the shooting view of the camera, specifically including: adjusting a first shooting view corresponding to the first time to a second shooting view corresponding to the second time.
CN202110120236.9A 2021-01-28 2021-01-28 Camera shooting method and device Pending CN114827436A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110120236.9A CN114827436A (en) 2021-01-28 2021-01-28 Camera shooting method and device
PCT/CN2021/142238 WO2022161080A1 (en) 2021-12-28 Photographic method and apparatus

Publications (1)

Publication Number Publication Date
CN114827436A true CN114827436A (en) 2022-07-29

Family

ID=82525729


Also Published As

Publication number Publication date
WO2022161080A1 (en) 2022-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination