CN116471477A - Method for debugging camera and related equipment

Info

Publication number: CN116471477A
Authority: CN (China)
Prior art keywords: area, camera, guide frame, shooting, debugging
Legal status: Pending
Application number: CN202210312374.1A
Other languages: Chinese (zh)
Inventor: Wu Zhihao (吴志豪)
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Related application: PCT/CN2022/120798, published as WO2023134215A1
Publication of CN116471477A

Classifications

(all under H04N: Pictorial communication, e.g. television)

    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method of debugging a camera, the method comprising: acquiring a shooting picture obtained by the camera shooting a monitored scene, the shooting picture containing a debugging target, i.e., a shooting target placed in the monitored scene for debugging the camera; and displaying a guide frame on the shooting picture, the guide frame being used to guide a user in adjusting the shooting angle and/or shooting parameters of the camera so that the degree of coincidence between a first area and a second area meets the user's requirement, the first area being the area occupied by the debugging target in the shooting picture and the second area being the area occupied by the guide frame in the shooting picture. The method is used to improve both the efficiency and the effect of camera debugging.

Description

Method for debugging camera and related equipment
The present application claims priority to Chinese Patent Application No. 202210028167.3, entitled "A method, apparatus and system for rapid tuning of snapshot cameras," filed with the China National Intellectual Property Administration on January 11, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the field of cameras, in particular to a method for debugging a camera and related equipment.
Background
With the rapid development of imaging technology and artificial intelligence (AI), current cameras typically support AI capabilities; for example, a camera can identify a target and take a snapshot of the identified target.
Before the camera is put into operation, a user (e.g., an installer of the camera) first installs the camera in a suitable position so that it can shoot a preset monitored scene. The user then needs to debug the camera so that it can later snap targets (e.g., persons or vehicles) in the monitored scene with high quality. During debugging, a debugging target serving as the shooting target is usually placed in the monitored scene in advance, and the user repeatedly adjusts the shooting angle and shooting parameters of the camera based on the camera's shooting picture and manual experience until the position, size, angle, etc. of the debugging target in the shooting picture meet the requirements.
It can be seen that this debugging process is extremely inefficient; moreover, since the adjustment relies on manual experience, the final debugging effect cannot be guaranteed.
Disclosure of Invention
The embodiment of the application provides a method and related equipment for debugging a camera, which are used for improving the debugging efficiency of the camera and improving the debugging effect of the camera. The embodiment of the application also provides a corresponding computer readable storage medium, a device for debugging the camera and the like.
A first aspect of the present application provides a method for debugging a camera, comprising: acquiring a shooting picture obtained by the camera shooting a monitored scene, the shooting picture containing a debugging target, i.e., a shooting target placed in the monitored scene for debugging the camera; and displaying a guide frame on the shooting picture, the guide frame being used to guide a user in adjusting the shooting angle and/or shooting parameters of the camera so that the degree of coincidence between a first area and a second area meets the user's requirement, where the first area is the area occupied by the debugging target in the shooting picture and the second area is the area occupied by the guide frame in the shooting picture.
During debugging the camera continuously acquires shooting pictures, so the user can debug the camera based on those pictures and the guide frame displayed on them; once debugging is complete and the camera starts working, it snaps the snapshot target and obtains snapshot pictures. The area where the guide frame is located can be understood as the ideal or desired area for the snapshot target in the shooting picture when the camera works, so the higher the degree of coincidence between the first area and the second area, the closer the camera's shooting angle and shooting parameters are to the ideal angle and parameters. Therefore, in the first aspect above, by displaying the guide frame on the camera's shooting picture, the user can intuitively compare the debugging target with the guide frame during debugging, and is thereby guided to quickly adjust the shooting angle and/or shooting parameters so that the debugging target and the guide frame coincide as much as possible, which improves debugging efficiency. In addition, because every user debugs the camera against the same guide frame, the standard is uniform, avoiding the problem that the debugging effect cannot be guaranteed because different users' manual experience varies.
In a possible implementation manner of the first aspect, displaying the guide frame on the shooting picture includes: acquiring the target type of the debugging target; and displaying the guide frame on the shooting picture, where the type of the guide frame is determined based on the target type.
The debugging target is typically set based on the actual snapshot requirement: if persons in the monitored scene are to be snapped, the debugging target is a person (the target type is "person"); if vehicles are to be snapped, the debugging target is a vehicle (the target type is "vehicle"). Thus, in this possible implementation, different types of guide frames, with different shapes, can be determined for different target types, so that the outline of the guide frame better matches the outline of the actual snapshot target, i.e., the ideal or desired area is more accurate, further improving the debugging effect.
In a possible implementation manner of the first aspect, displaying the guide frame on the shooting picture includes: acquiring the resolution of the camera; and displaying the guide frame on the shooting picture, where the size of the guide frame is determined based on the resolution.
In this possible implementation, guide frames of different sizes are determined for different resolutions, so that the size of the guide frame matches the resolution of the camera. As a result, for shooting pictures of different resolutions, the guide frame is always displayed at an appropriate size, which improves the applicability of the scheme and the debugging effect.
In a possible implementation manner of the first aspect, displaying the guide frame on the shooting picture includes: displaying the guide frame in a preset area of the shooting picture. For example, the second area may be set according to the focusing area of the camera. Specifically, the guide frame may be located in the middle of the shooting picture in the left-right (horizontal) dimension and at the bottom in the up-down (vertical) dimension; for example, the focusing area (the second area) of the camera is the middle of the bottom of the shooting picture, with a height of no more than one third of the height of the shooting picture. This area is the ideal focusing area of the camera, so if the guide frame is placed there, the camera can focus on the area where the guide frame is located. The area where the guide frame is located (i.e., the area where the target will be located when the camera works) is therefore clearer in the shooting picture and the image quality of the target is best, so the snapshot effect can be improved.
In a possible implementation manner of the first aspect, the transparency of the second area differs from that of a third area, where the third area is the area of the shooting picture other than the second area.
In this possible implementation, the second area corresponding to the guide frame and the remaining third area of the shooting picture are displayed in a visually distinct manner, which strengthens the guidance given to the user and improves the user experience.
In a possible implementation manner of the first aspect, displaying the guide frame on the shooting picture includes: displaying the outline of the second area on the shooting picture.
In this possible implementation, the guide frame is displayed by drawing the outline of the second area on the shooting picture, which strengthens the guidance given to the user and improves the user experience.
In a possible implementation manner of the first aspect, the method further includes: determining a focusing area and/or encoding parameters of the camera so that the sharpness of the second area is higher than that of a third area, where the third area is the area of the shooting picture other than the second area.
It should be understood that after debugging is complete, the second area where the guide frame is located is the area where the snapshot target will be located. In this possible implementation, the area where the snapshot target is located is therefore sharper than the other areas in the shooting pictures the camera obtains when working, so the snapshot effect can be improved.
In a possible implementation manner of the first aspect, after displaying the guide frame on the shooting picture, the method further includes: detecting the degree of coincidence between the first area and the second area; and outputting the degree of coincidence.
Because the degree of coincidence quantifies how well the first area and the second area match, it accurately reflects their overlap. In this possible implementation, the user can conveniently learn the exact overlap of the two areas and debug according to how the degree of coincidence changes, which further improves debugging efficiency and the user experience.
Optionally, the degree of coincidence can be played as voice or displayed as text, so the user can directly perceive the current degree of coincidence while debugging the camera, which makes debugging easier and improves the user experience.
In a possible implementation manner of the first aspect, after detecting the degree of coincidence between the first area and the second area, the method further includes: outputting first prompt information in a case where the degree of coincidence is not higher than a first threshold, the first prompt information prompting the user to adjust the shooting angle and/or shooting parameters of the camera.
This possible implementation strengthens the guidance given to the user while debugging the camera and can improve the user experience.
Optionally, the first prompt information can be played as voice or displayed as text, so the user can directly perceive it while debugging the camera, which makes debugging easier and improves the user experience.
In a possible implementation manner of the first aspect, in a case where the degree of coincidence is smaller than a second threshold, the first prompt information prompts the user to adjust the shooting angle of the camera; in a case where the degree of coincidence is greater than or equal to the second threshold but smaller than the first threshold, the first prompt information prompts the user to adjust the shooting parameters of the camera.
In this possible implementation, different prompt information is given to the user at different degrees of coincidence, which further strengthens the guidance given to the user, further improves debugging efficiency, and improves the debugging experience.
In a possible implementation manner of the first aspect, after detecting the degree of coincidence between the first area and the second area, the method further includes: outputting second prompt information in a case where the degree of coincidence is higher than the first threshold, the second prompt information indicating that debugging of the camera is complete.
This possible implementation strengthens the guidance given to the user while debugging the camera and can improve the user experience.
Optionally, the second prompt information can be played as voice or displayed as text, so the user can directly perceive it while debugging the camera, which makes debugging easier and improves the user experience.
In a possible implementation manner of the first aspect, displaying the guide frame on the shooting picture includes: displaying the guide frame on the shooting picture in response to a first instruction, the first instruction indicating entry into a debugging mode.
In this possible implementation, the guide frame is displayed on the shooting picture after the user issues the first instruction to enter the debugging mode, which improves the feasibility of the scheme.
In a possible implementation manner of the first aspect, after displaying the guide frame on the shooting picture in response to the first instruction, the method further includes: stopping displaying the guide frame on the shooting picture in response to a second instruction, the second instruction indicating exit from the debugging mode.
In this possible implementation, the display of the guide frame is stopped after the user issues the second instruction to exit the debugging mode, which improves the feasibility of the scheme.
In a possible implementation manner of the first aspect, the position or the angle of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameter.
A second aspect of the present application provides an apparatus for debugging a camera, configured to perform the method of the first aspect or any possible implementation of the first aspect. Specifically, the apparatus comprises modules or units for performing that method, such as an acquisition module, a display module, a determination module, and a detection module.
A third aspect of the present application provides an apparatus for debugging a camera, comprising a processor and a memory, the memory being configured to store a program and the processor being configured to implement the method of the first aspect or any possible implementation thereof by executing the program.
A fourth aspect of the present application provides a computer readable storage medium storing one or more computer executable instructions which, when executed by a processor, perform the method of the first aspect or any one of its possible implementations.
A fifth aspect of the present application provides a computer program product storing one or more computer executable instructions which, when executed by a processor, perform the method of the first aspect or any one of its possible implementations.
A sixth aspect of the present application provides a chip system comprising at least one processor and an interface for receiving data and/or signals, the at least one processor being configured to support a computer device in implementing the functions referred to in the first aspect or any one of its possible implementations. In one possible design, the chip system may further include a memory holding the program instructions and data necessary for the computer device. The chip system may consist of a chip, or may include a chip and other discrete devices.
It should be understood that, for details of the second to sixth aspects and their various implementations, reference may be made to the detailed description of the first aspect and its implementations; likewise, for their beneficial effects, reference may be made to the analysis of the beneficial effects of the first aspect and its implementations, which are not repeated here.
Drawings
Fig. 1 is a schematic view of a scene of a camera for intelligent target capturing according to an embodiment of the present application;
fig. 2 is another schematic view of a scenario in which a camera according to an embodiment of the present application performs intelligent target capturing;
FIG. 3 is a schematic diagram of an embodiment of a method for debugging a camera according to an embodiment of the present application;
fig. 4 is a schematic view of an embodiment of a shooting picture provided in the embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of a shooting picture provided in an embodiment of the present application;
fig. 6A is a schematic diagram of another embodiment of a shooting screen provided in an embodiment of the present application;
fig. 6B is a schematic diagram of another embodiment of a shooting screen provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of a method for debugging a camera according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an embodiment of display content of a display screen according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a device for debugging a camera according to an embodiment of the present application;
fig. 10 is a schematic view of another embodiment of an apparatus for debugging a camera according to an embodiment of the present application;
fig. 11 is a schematic diagram of another embodiment of a device for debugging a camera according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will now be described with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. As a person of ordinary skill in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are likewise applicable to similar technical problems.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
The embodiment of the application provides a method and related equipment for debugging a camera, which are used for improving the debugging efficiency of the camera and improving the debugging effect of the camera. The embodiment of the application also provides a corresponding computer readable storage medium, a device for debugging the camera and the like. The following will describe in detail.
The application scenario of the embodiment of the present application is illustrated below.
With the rapid development of camera technology, cameras are being used more and more widely, and a camera supporting artificial intelligence (AI) capability can realize intelligent target snapshot. Specifically, when the camera works, it continuously shoots the monitored scene (i.e., continuously records), and the shooting result is a video consisting of consecutive shooting pictures; a user (such as an installer or operator of the camera) can set a snapshot range on the shooting picture in advance. The camera performs snapshot target detection on each acquired shooting picture, and when it detects a snapshot target in a shooting picture and detects that the target is located within the preset snapshot range, it automatically snaps the target to obtain an image containing it, namely a snapshot picture or snapshot image.
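This snapshot workflow can be summarized in a brief sketch. It is illustrative only, assuming axis-aligned (x, y, w, h) rectangles; `detect_targets` and `capture` are hypothetical stand-ins for the camera's AI detector and its snapshot action:

```python
def contains(outer, inner):
    """True if rectangle `inner` (x, y, w, h) lies entirely inside `outer`."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh

def snapshot_loop(frames, snap_range, detect_targets, capture):
    """For each shooting picture, snap every detected target that lies
    within the preset snapshot range (both helpers are hypothetical)."""
    for frame in frames:
        for box in detect_targets(frame):
            if contains(snap_range, box):
                capture(frame, box)
```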
In order to obtain better snapshot results, the camera usually needs to be debugged. During debugging, the camera shoots continuously to obtain shooting pictures, a debugging target is set, and the shooting angle and shooting parameters of the camera are adjusted so that the debugging target is located in an ideal area of the shooting picture.
Before debugging the camera, a user (e.g., a worker who debugs the camera) may first identify the snapshot target and the snapshot area in the monitored scene.
For example, as shown in fig. 1, assume the monitored scene of camera C is the entrance of a building and, specifically, that camera C needs to snap people entering the building; the user can then determine that the snapshot target in the monitored scene is a person. Through analysis, the user finds that people entering the building pass through area A with high probability, so the user can take area A as the snapshot area in this scene.
For another example, as shown in fig. 2, assume the monitored scene of camera C is a parking lot and vehicles entering it need to be snapped; the user can then determine that the snapshot target in the monitored scene is a vehicle. Through analysis, the user finds that a vehicle entering the parking lot usually stops in front of the barrier gate, with its license plate in area B with high probability, so the user can take area B as the snapshot area in this scene.
After the snapshot target and the snapshot area in the monitored scene are defined, the user can set a debugging target consistent with the snapshot target at the snapshot area of the monitored scene. For example, in the scenario shown in fig. 1, the user may take a person as the debugging target and have that person stand in area A. In the scenario shown in fig. 2, the user may take a vehicle as the debugging target and position it so that its license plate is located in area B. During debugging, the debugging target remains in the pre-defined snapshot area, which assists the user in completing the debugging of the camera.
It should be understood that the situation of the debugging target in the shooting picture reflects how the snapshot target will appear in the shooting picture when the camera later works. Therefore, by adjusting the shooting angle and shooting parameters of the camera so that the angle, size, position, etc. of the debugging target are each in an ideal state, the angle, size, position, etc. of the snapshot target will in most cases also be in an ideal state during subsequent operation, and the quality of the snapshot can be ensured.
The present application provides a method of debugging a camera, which may be performed by a device for debugging a camera (referred to as a debugging device). It should be understood that the debugging device may be the camera to be debugged itself, or any other device with computing and display functions, for example a computer, a mobile phone, or a tablet computer, which is not limited in the embodiments of the present application.
The method for debugging the camera is described by taking the application scenario shown in fig. 1 as an example.
An embodiment of the method for debugging a camera provided in the present application is described below with reference to fig. 3-6B.
As shown in fig. 3, the method for debugging a camera includes, but is not limited to, the following steps:
s301, acquiring a shooting picture obtained by shooting a monitoring scene by a camera.
During debugging, the debugging device can continuously acquire, in real time, the shooting pictures obtained by the camera shooting the monitored scene, so that the user can complete the debugging of the camera based on them. Since the debugging target is set at a preset position of the monitored scene (i.e., a position within the predetermined snapshot area) during debugging, the shooting picture includes the debugging target. For example, in the scene shown in fig. 1, the shooting picture includes the person located in area A.
It should be understood that when the debugging device is a device other than the camera, it can communicate with the camera and receive the shooting pictures the camera sends; when the debugging device is the camera itself, it directly shoots the monitored scene to obtain the shooting pictures.
S302, displaying a guide frame on a shooting picture.
The guiding frame is used for guiding a user to adjust the shooting angle and/or shooting parameters of the camera so that the contact ratio of the first area and the second area meets the requirement of the user; the first area is an area where the debugging target is located in the shooting picture, and the second area is an area where the guiding frame is located in the shooting picture.
The shape, size, and position of the guide frame displayed on the photographing screen will be described below, respectively.
For example, the shape of the guide frame displayed on the shooting picture may be determined by the snapshot requirement. Specifically, several types of guide frames may be provided according to the snapshot requirement, each type having a different shape. The type of the guide frame can then be determined according to the type of the debugging target, and a guide frame of that type displayed. For example, "person" and "car" guide frame types may be set in advance, where the shape (outline) of the "person" type may be the whole body or the upper body of a person, and the shape of the "car" type is a vehicle. In this embodiment, the type of the debugging target is "person", so a guide frame of the "person" type can be displayed.
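As a minimal illustration of this selection step (not code from the application), the target type can index a catalogue of guide-frame types; the entries below are placeholders for real outline data:

```python
# Hypothetical catalogue: in a real product each entry would carry an
# actual outline (polygon or mask), not a text label.
GUIDE_FRAME_TYPES = {
    "person": "whole-body or upper-body outline",
    "car": "vehicle outline",
}

def select_guide_frame(target_type: str) -> str:
    """Pick the guide-frame type matching the debugging target's type."""
    try:
        return GUIDE_FRAME_TYPES[target_type]
    except KeyError:
        raise ValueError(f"no guide frame defined for target type {target_type!r}")
```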
It should be understood that, according to actual snapshot requirements, the debugging target may be of types other than "person" and "car"; correspondingly, more types, i.e., more shapes, of guide frames may be provided, which is not limited in this application.
In addition, alternatively, the shape of the guide frame displayed on the photographing screen may be a preset fixed shape, which is not limited in this application.
For example, the size of the guide frame displayed on the shooting picture may be determined according to the resolution of the camera; that is, the size of the guide frame is adjusted to the camera's resolution so that it appears at an appropriate size in the shooting pictures of different cameras. It will be appreciated that the guide frame should be larger when the resolution of the camera is higher and smaller when it is lower. In actual implementation, a correspondence between different resolutions and different guide frame sizes can be preset, and a guide frame of the corresponding size displayed on the shooting picture according to the resolution of the camera being debugged.
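A sketch of this lookup; the correspondence table is assumed for illustration, and real values would come from product tuning:

```python
# Assumed resolution -> guide-frame size (width, height) correspondence, in pixels.
SIZE_BY_RESOLUTION = {
    (1920, 1080): (240, 480),
    (2560, 1440): (320, 640),
    (3840, 2160): (480, 960),
}

def guide_frame_size(resolution):
    """Return the preset guide-frame size for this resolution, scaling the
    1080p entry proportionally for resolutions not in the table."""
    if resolution in SIZE_BY_RESOLUTION:
        return SIZE_BY_RESOLUTION[resolution]
    base_w, base_h = SIZE_BY_RESOLUTION[(1920, 1080)]
    scale = resolution[0] / 1920
    return (round(base_w * scale), round(base_h * scale))
```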
Optionally, the size of the guide frame displayed on the shooting picture can also be set according to the pixel requirement of the snapshot target; that is, the size of the guide frame can be adjusted to the pixel requirements of different snapshot targets in the actual scene, so that the guide frame meets the pixel requirement for each kind of snapshot target.
Alternatively, the size of the guide frame displayed on the photographing screen may be a preset fixed size, which is not limited in this application.
The position at which the guide frame is displayed on the shooting picture may be preset according to actual needs; for example, it may be set according to the focusing area of the camera. Specifically, the guide frame is located in the middle of the shooting picture in the left-right (horizontal) dimension and at the bottom in the up-down (vertical) dimension. For example, the focusing area (the second area) of the camera is the middle of the bottom of the shooting picture, with a height of no more than one third of the picture height. More specifically, the vertical center line of the shooting picture coincides with the vertical center line of the second area, the bottom of the shooting picture coincides with the bottom of the second area, and the height of the second area is no more than one third of the height of the shooting picture; in other words, if the shooting picture is divided horizontally into three parts, the second area lies in the part closest to the bottom, at the middle position horizontally. This area is the most ideal focusing area of the camera, so the guide frame can be placed there: the camera can then focus on the area where the guide frame is located, that area (i.e., the area where the snapshot target will be located when the camera works) is clearer in the shooting picture, the image quality of the snapshot target is best, and the snapshot effect can be improved. An example is the second area 20 shown in fig. 4. It should be understood that, since the guide frame corresponds to an area, its display position on the shooting picture may be measured by the position of some point of the guide frame, for example a point on its outline, or its center point (the intersection of its horizontal and vertical center lines), which is not limited in this application.
It should be understood that the second area is usually positioned at the ideal focusing area of the camera within the whole shooting picture, and placing the guide frame there gives the snapshot target higher quality in the snapshot picture and improves the snapshot effect. It should be noted that the camera's preferred focusing area may vary with actual circumstances, and the specific position of the second area in the shooting picture may be adjusted accordingly, which is not limited in the embodiments of the present application. Of course, in some implementations, the position of the guide frame may be some other position preset according to actual needs, unrelated to the focusing area of the camera, which is not limited in this application.
Combining the above, in a practical example, the debugging device may display an interface for the user to input the type of the debugging target, the resolution of the camera, or the desired position of the guide frame, so that the shape, size, or position of the guide frame can be determined from the user's input, and a guide frame of the corresponding shape and size displayed at the corresponding position of the shooting picture.
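The bottom-center placement rule above (vertical center lines coincident, bottoms coincident, height at most one third of the picture height) reduces to a small geometric computation; a sketch under exactly those assumptions:

```python
def second_area(frame_w, frame_h, guide_w, guide_h):
    """Place the guide frame horizontally centered and flush with the bottom
    of the shooting picture, shrinking it proportionally if its height would
    exceed one third of the picture height. Returns (x, y, w, h)."""
    max_h = frame_h // 3
    if guide_h > max_h:
        guide_w = round(guide_w * max_h / guide_h)
        guide_h = max_h
    x = (frame_w - guide_w) // 2   # vertical center lines coincide
    y = frame_h - guide_h          # bottoms coincide
    return (x, y, guide_w, guide_h)

# Example: a 1920x1080 shooting picture and a 240x480 guide frame.
print(second_area(1920, 1080, 240, 480))  # -> (870, 720, 180, 360)
```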
The display form of the guide frame on the shooting picture is not limited, so long as the user can intuitively identify the guide frame in the shooting picture. Several possible forms of display of the guide frame on the photographed screen are described below.
Mode one: the guide frame may be displayed by way of a contour line, and in particular, a contour line of the guide frame may be displayed on the photographing screen, for example, the second region 20 in fig. 4. The contour lines may be dotted lines or solid lines with different thicknesses, different types and different colors, and the embodiment of the present application is not limited thereto.
Mode two: the guide frame may be displayed by setting the second area (i.e., the area where the guide frame is located) and the third area to different transparencies, where the third area is the area of the shooting picture other than the second area. As shown in fig. 5, the debugging device may set different transparencies for the second area 20 and the third area 30 when displaying the guide frame. It should be understood that the specific transparency values of the two areas are not limited in this application, as long as the user can tell them apart. When the second area is not completely transparent, it may also be filled, for example with a solid color, a gradient, or a pattern; the third area may be treated similarly, which is not limited in this embodiment. For example, in fig. 5 the debugging device fills the third area with diagonal lines.
Optionally, in mode two, the different transparencies of the second and third areas are implemented by overlaying a transparent picture with an alpha channel on the shooting picture. The alpha channel can be used to adjust the translucency of an image, and the debugging device overlays the transparent image with the alpha channel on the shooting picture as the guide frame.
Alternatively, the first and second modes may be used in combination, that is, the second area and the third area may be set to different transparencies while the guide frame is displayed by means of a contour line, so as to further enhance the guiding effect on the user.
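A minimal sketch of mode two, assuming the shooting picture is a numpy HxWx3 uint8 array; the dimming factor for the third area is an arbitrary illustrative choice:

```python
import numpy as np

def overlay_guide(frame: np.ndarray, second_area, alpha: float = 0.5) -> np.ndarray:
    """Blend the third area (everything outside the guide frame) toward black
    with weight `alpha`, leaving the second area untouched, so the two areas
    are visually distinguished as described for mode two."""
    x, y, w, h = second_area
    out = (frame.astype(np.float32) * (1.0 - alpha)).astype(np.uint8)
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]  # restore the second area
    return out
```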
It should be appreciated that during debugging, the shape, size, and position of the guide frame (i.e., the second area) displayed on the camera's shooting picture generally remain unchanged; the content of the shooting picture may change, but the guide frame displayed on it does not. The overlap between the first area (the area where the debugging target is located in the shooting picture) and the second area is therefore determined by the shape, size, and position of the first area. The shape and position of the first area are related to the shooting angle of the camera, and its size is related to the shooting parameters. Specifically, adjusting the shooting angle amounts to rotating the camera, which adjusts the range of the monitored scene that the camera shoots and hence the shape and position of the first area; adjusting the shooting parameters amounts to controlling parameters such as the optical magnification of the lens, which adjusts the apparent distance to the monitored scene and hence the size of the first area. The user can thus keep increasing the degree of coincidence between the first area and the second area by adjusting the shooting angle and/or shooting parameters until the degree of coincidence is satisfactory, for example until the two areas almost completely coincide. As shown in fig. 6A, suppose that after the debugging device displays the guide frame on the shooting picture, the user finds that the first area 10 where the debugging target is located and the second area 20 where the guide frame is located do not overlap. After the user adjusts the shooting angle and/or shooting parameters of the camera, as shown in fig. 6B, the first area 10 and the second area 20 almost completely overlap; assuming the user considers this degree of coincidence to meet the requirement, the adjustment can stop and the debugging of the camera is complete.
In addition, the advantages of the above scheme are more apparent in scenes with requirements on the pixels of the snapshot target in the shooting picture. For example, in a face snapshot scene it is generally required that the face span at least 120 px between the two ears in the shooting picture; in a vehicle snapshot scene the long side of the license plate is usually required to span at least 120 px. In existing debugging schemes the user cannot judge by eye whether the pixel value of the snapshot target meets such requirements, and so has to repeatedly and manually adjust the optical magnification of the camera. With the above scheme, the guide frame is displayed on the shooting picture with its size set to satisfy the snapshot requirement (for example, a horizontal width of no less than 120 px); the user then only needs to adjust the camera until the debugging target essentially coincides with the guide frame for the pixel value of the snapshot target to meet the requirement.
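This reasoning can be illustrated with a small check; the 120 px figure mirrors the examples in the text, and the function names are hypothetical:

```python
REQUIRED_PX = 120  # e.g., ear-to-ear width of a face, or a plate's long side

def guide_frame_min_width(required_px: int = REQUIRED_PX) -> int:
    """Hypothetical sizing rule: make the guide frame's horizontal width no
    smaller than the snapshot pixel requirement, so a debugging target that
    essentially coincides with the frame meets the requirement too."""
    return required_px

def meets_requirement(first_area_w: int, required_px: int = REQUIRED_PX) -> bool:
    """Check the measured width of the first area against the requirement."""
    return first_area_w >= required_px

print(meets_requirement(130))  # -> True once the target fills the guide frame
```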
Since the camera usually focuses based on a preset focusing algorithm, and such algorithms usually give the central area of the shooting picture high focusing weight, the picture quality of the central area tends to be better. In practice, however, the second area where the guide frame is located may not be in the central area of the picture. In other cases, the camera may focus on the background or on other areas the user does not care about; for example, in a low-light environment the camera usually shoots with a large aperture and shallow depth of field to obtain a better picture, which may cause it to focus on the background of the shooting picture or on areas of no interest. In view of these two situations, optionally, in the embodiments provided in this application, the debugging device may further adjust the focusing area of the camera so that the camera uses the second area as the focusing area, making the sharpness of the second area higher than that of the third area. It should be understood that when debugging is complete, the second area where the guide frame is located is the area where the snapshot target will be located, so in this way the area containing the snapshot target is sharper in the shooting pictures the camera obtains when working; this improves the snapshot effect and the accuracy of subsequent intelligent analysis of the snapshot target.
Optionally, the user may also confirm the focusing area manually: after the user considers that the degree of coincidence between the first area and the second area meets the requirement, the user triggers the camera to focus, or the debugging device triggers focusing automatically.
In actual implementation, when the debugging device is a device other than the camera, it can also send the determined focusing area to the camera, so that the camera snaps according to that focusing area when working and obtains snapshot pictures in which the snapshot target is sharp.
Optionally, after the focusing area is determined, the user may set the snapshot range of the camera within the shooting picture through the debugging device. By default the snapshot range is the whole shooting picture, but then the camera also snaps when the snapshot target is outside the focusing area, and the quality of those snapshots may be relatively poor. The user can therefore draw or adjust the snapshot range in the shooting picture through the debugging device; for example, the user may set the snapshot range to the bottom third of the shooting picture, or to the second area. The camera then snaps only when the snapshot target is within the snapshot range, so the quality of the snapshot results can be significantly improved.
In addition, when the camera is working, the shooting pictures may need to be compressed before being sent to other devices; for example, the camera may encode a shooting picture as JPEG (joint photographic experts group) or with a standard codec such as H.264 before sending it to a server for storage. Therefore, as an alternative, the debugging device may also determine encoding parameters for the camera so that, when the camera encodes shooting pictures during subsequent operation, the area where the guide frame is located (i.e., the second area) is used as the encoding region of interest (ROI), making the sharpness of the second area in the encoded image higher than that of the other areas. In this implementation the encoding parameters may be indication information for the encoded ROI area, and when the debugging device is a device other than the camera it can send these encoding parameters to the camera so that the camera encodes accordingly when working. It can be understood that in this way the area where the snapshot target is located is sharper, which improves the snapshot effect and the accuracy of subsequent intelligent analysis of the snapshot target.
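As an illustration only, such encoding parameters can be modeled as a plain structure handed to the camera; the field names and the quantization-offset mechanism are assumptions, not an actual encoder API:

```python
from dataclasses import dataclass

@dataclass
class RoiEncodingParams:
    """Hypothetical indication information for the encoded ROI.

    roi: the second area (x, y, w, h) where the guide frame sits.
    roi_qp_offset: assumed knob that lowers the quantization step inside the
    ROI so the second area keeps more detail than the rest of the picture.
    """
    roi: tuple
    roi_qp_offset: int = -6
    codec: str = "h264"  # the text names JPEG and H.264 as example codecs

# The debugging device would send these parameters to the camera, which
# applies them when encoding shooting pictures during operation.
params = RoiEncodingParams(roi=(870, 720, 180, 360))
```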
Another embodiment of the method for commissioning a camera provided in the present application is described below in connection with fig. 7-8.
As shown in fig. 7, the method for debugging a camera includes, but is not limited to, the following steps:
s701, acquiring a shooting picture obtained by shooting a monitoring scene by a camera.
Please refer to the related description in S301, and the description is omitted.
S702, in response to a first instruction, displaying a guide frame on a shooting picture.
As shown in fig. 8, after the shooting picture is acquired, the guide frame may be displayed on it once the user triggers the debugging mode. The debugging device comprises a display screen in which a debugging interface and a shooting picture frame can be displayed; for example, the shooting picture frame is on the left of the display screen and the debugging interface on the right, and when the debugging device acquires a shooting picture it is displayed in the shooting picture frame. The user clicking the key for starting debugging in the debugging interface is equivalent to the user issuing the first instruction, and the debugging device starts displaying the guide frame on the shooting picture in response to this instruction to enter the debugging mode. The manner of displaying the guide frame is as described for S302 and is not repeated.
It should be noted that the debugging interface and the shooting picture can be laid out and presented on the display screen in various ways. For example, the debugging interface may be at the top of the display screen and the shooting picture frame at the bottom. As another example, after the user clicks the (virtual) key for starting debugging in the debugging interface, the shooting picture may be displayed in the shooting picture frame with the guide frame shown at the same time; or the shooting picture frame may switch to full-screen display, from which the user can exit via a return key on the display screen to restore the debugging interface.
S703, detecting the degree of coincidence between the first area and the second area, and outputting the degree of coincidence.
After the guide frame is displayed on the shooting picture, the user can adjust the shooting angle and/or shooting parameters of the camera, for example by moving the camera to adjust the shooting angle or operating the camera's adjustment keys to change the shooting parameters. To make debugging more convenient, the user can also enter the shooting angle and shooting parameters the camera should use into the debugging device, which transmits the data to the camera so that the camera adjusts itself automatically. Throughout this process, the debugging device can monitor the degree of coincidence between the first area and the second area in real time and output it.
Specifically, the debugging device may identify the first area. Then, illustratively, the degree of coincidence between the first area and the second area may be determined from the area ratio of the first area within the second area and the area ratio of the second area within the first area, and output in real time on that basis. Because the degree of coincidence quantifies how well the two areas match, it accurately reflects their overlap; the user can therefore learn the exact overlap of the two areas and debug according to how the degree of coincidence changes, which further improves debugging efficiency and the user experience.
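A sketch of this computation for axis-aligned rectangles; the text names the two area ratios but not how they are combined, so taking their minimum is an assumed choice (similar in spirit to intersection-over-union):

```python
def intersection_area(a, b):
    """Overlap area of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

def coincidence(first_area, second_area):
    """Degree of coincidence from the two ratios named in the text: the share
    of the second area covered by the first, and vice versa (min is assumed)."""
    inter = intersection_area(first_area, second_area)
    ratio_in_second = inter / (second_area[2] * second_area[3])
    ratio_in_first = inter / (first_area[2] * first_area[3])
    return min(ratio_in_second, ratio_in_first)

# Example: a debugging target almost matching the guide frame.
print(round(coincidence((880, 725, 175, 350), (870, 720, 180, 360)), 2))  # -> 0.92
```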
Note that the degree of coincidence between the first area and the second area may also be calculated in other ways, which is not limited in this application.
Optionally, the debugging device is further provided with a speaker, and the degree of coincidence can be played as voice, displayed as text, or output in both ways simultaneously, so that the user directly perceives the current degree of coincidence while debugging the camera, which makes debugging easier and improves the user experience.
When the degree of coincidence is output as text, it may be displayed on the shooting picture or on an interface other than the shooting picture, for example above the second area on the shooting picture, or in the debugging interface. It may be shown in text of various fonts, colors, or sizes, or the percentage it represents may be shown as a bar chart.
S704, outputting first prompt information in a case where the degree of coincidence is not higher than a first threshold.
When the degree of coincidence is not higher than the first threshold, the difference between the debugging target and the guide frame is still obvious and the user needs to adjust the shooting angle and/or shooting parameters of the camera. Outputting the first prompt information at this point prompts the user to do so, which strengthens the guidance given to the user and improves the user experience.
Optionally, the first prompt information can be played and output in a voice mode or displayed and output in a text mode, so that a user can intuitively feel the first prompt information in the process of debugging the camera, the user can conveniently debug, and the user experience is improved.
Optionally, when the degree of coincidence is smaller than the first threshold, the debugging device can give the user a more detailed and targeted prompt. Specifically, a second threshold (smaller than the first threshold) may be set: when the degree of coincidence is smaller than the second threshold, the first prompt information prompts the user to adjust the shooting angle of the camera; when the degree of coincidence is greater than or equal to the second threshold but smaller than the first threshold, the first prompt information prompts the user to adjust the shooting parameters of the camera. The first and second thresholds may be preset values, which are not limited in this application.
For example, assume the first threshold is 90% and the second threshold is 70%. When the user starts debugging the camera, the debugging target may be far from the guide frame, so the debugging device detects that the degree of coincidence between the first area and the second area is below 70%. The debugging device may then output first prompt information asking the user to adjust the shooting angle, for example displaying "Please adjust the camera's shooting angle so that the debugging target approaches the guide frame" in the debugging interface while playing the same sentence as voice.
Then, once the user has adjusted the shooting angle and the debugging device detects that the degree of coincidence is greater than or equal to 70% but not more than 90%, the position and shape of the debugging target and the guide frame are probably close, but their sizes may still differ. In this case the debugging device may output first prompt information asking the user to adjust the shooting parameters, for example displaying "Please adjust the camera's shooting parameters so that the debugging target coincides with the guide frame" in the debugging interface while playing the same sentence as voice. Seeing this prompt, the user can adjust the optical magnification of the lens so that the size of the debugging target better matches that of the guide frame. It should be understood that, further, when the debugging device detects that the degree of coincidence is above 90%, step S705 is performed.
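The two-threshold prompting policy of this example (70% and 90%) reduces to a small decision function; a sketch:

```python
FIRST_THRESHOLD = 0.90   # above this, debugging is considered complete
SECOND_THRESHOLD = 0.70  # below this, the shooting angle is likely off

def prompt_for(coincidence: float) -> str:
    """Map the detected degree of coincidence to the prompt the debugging
    device outputs (displayed as text and/or played as voice)."""
    if coincidence > FIRST_THRESHOLD:
        return "Debugging is complete."  # the second prompt information
    if coincidence < SECOND_THRESHOLD:
        return ("Please adjust the camera's shooting angle so that the "
                "debugging target approaches the guide frame.")
    return ("Please adjust the camera's shooting parameters so that the "
            "debugging target coincides with the guide frame.")

for c in (0.45, 0.80, 0.95):
    print(f"{c:.0%}: {prompt_for(c)}")
```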
And S705, outputting second prompt information when the contact ratio is higher than the first threshold value.
Optionally, the second prompt information can be played and output in a voice mode or displayed and output in a text mode, so that a user can intuitively feel the second prompt information in the process of debugging the camera, the user can conveniently debug, and the user experience is improved.
When the overlap ratio is higher than the first threshold, the overlap between the debugging target and the guide frame generally meets the expected requirement. The debugging device may therefore output second prompt information, for example by displaying "Debugging is finished" in the debugging interface while playing the voice "Debugging is finished". Indicating completion through the second prompt information strengthens the guidance given to the user and improves the user experience.
S706: In response to the second instruction, stop displaying the guide frame on the shooting picture.
After the debugging device outputs the second prompt information, the user can observe the relationship between the debugging target and the guide frame in the shooting picture. When the user considers that the debugging target exactly coincides with the guide frame, or that the current result already meets the user's requirement, the user can instruct the debugging device to stop debugging, for example by clicking a "stop debugging" button in the debugging interface. This is equivalent to sending the second instruction, and the debugging device, in response to the second instruction indicating exit from the debugging mode, stops displaying the guide frame on the shooting picture, completing the debugging of the camera. Alternatively, after the second prompt information is output, the user can continue debugging according to what is observed until the requirement is met; details are not repeated here.
It should be noted that the user may click the "stop debugging" button at any time to make the debugging device exit the debugging mode. In that case, the debugging device may control the camera to save the current shooting parameters and shooting angle, so that when the user later re-enters the debugging mode, debugging can continue from where it last stopped.
In this embodiment of the application, the debugging device detects the overlap ratio between the first area corresponding to the debugging target and the second area corresponding to the guide frame, and can output the overlap ratio in various ways. In addition, the debugging device outputs different prompt information for different overlap ratios. Compared with the embodiment shown in fig. 3, this embodiment therefore further strengthens the guidance given to the user during debugging, further improves the efficiency of debugging the camera, and improves the user experience.
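The embodiments do not fix how the overlap ratio is computed. One common choice would be the intersection-over-union (IoU) of the bounding rectangle of the debugging target (the first area) and the guide frame (the second area); the following Python sketch assumes axis-aligned rectangles, which is an assumption of this illustration rather than something the embodiments prescribe:

    from typing import NamedTuple

    class Box(NamedTuple):
        """Axis-aligned rectangle: (left, top, right, bottom) in pixels."""
        x1: float
        y1: float
        x2: float
        y2: float

    def overlap_ratio(first: Box, second: Box) -> float:
        """Intersection-over-union of the debugging-target box (first area)
        and the guide-frame box (second area); returns a value in [0, 1]."""
        ix1, iy1 = max(first.x1, second.x1), max(first.y1, second.y1)
        ix2, iy2 = min(first.x2, second.x2), min(first.y2, second.y2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_first = (first.x2 - first.x1) * (first.y2 - first.y1)
        area_second = (second.x2 - second.x1) * (second.y2 - second.y1)
        union = area_first + area_second - inter
        return inter / union if union > 0 else 0.0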
It should be understood that, similar to the embodiment shown in fig. 3, in the embodiment shown in fig. 7 the debugging device may also determine the focusing area and/or encoding parameters of the camera, so that the area where the guide frame is located is clearer both in the picture shot by the camera and in the encoded picture, thereby improving the quality of the snapshot; details are not repeated here.
Similar to the embodiment shown in fig. 3, the embodiment shown in fig. 7 may also be applied to a scene in which multiple snapshot targets exist; details are not repeated here.
An embodiment of this application further provides a device for debugging a camera (referred to as a debugging device for short) that can implement the above method for debugging a camera. The device is described below with reference to figs. 9-11.
Fig. 9 is a schematic structural diagram of a debugging device 900 provided in this application. The debugging device 900 includes:
an acquisition module 901, configured to acquire a shooting picture obtained by shooting a monitoring scene with a camera, where the shooting picture includes a debugging target, and the debugging target is a shooting target set in the monitoring scene for debugging the camera. The acquisition module 901 may perform step S301 or step S701 in the above method of debugging a camera;
a display module 902, configured to display a guide frame on the shooting picture, where the guide frame is used to guide a user to adjust the shooting angle and/or shooting parameters of the camera so that the overlap ratio of a first area and a second area meets the user's requirement, the first area being the area where the debugging target is located in the shooting picture and the second area being the area where the guide frame is located in the shooting picture. The display module 902 may perform step S302 in the above method embodiment, and may correspond to the display screen in the above method embodiment.
In this embodiment of the application, the display module 902 displays a guide frame on the shooting picture of the camera, and the guide frame guides the user to adjust the shooting angle and/or shooting parameters of the camera so that the debugging target falls within the guide frame as far as possible. This achieves low-cost debugging of the camera while ensuring both debugging efficiency and shooting effect.
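Purely as an illustration of this two-module division, a skeleton of debugging device 900 might look as follows; the class, method and attribute names are hypothetical, since the embodiment describes functional modules rather than concrete code:

    class DebuggingDevice:
        """Hypothetical skeleton of debugging device 900: one module that
        acquires the shooting picture and one that overlays the guide frame."""

        def __init__(self, camera, screen):
            self.camera = camera  # source of the shooting picture (S301/S701)
            self.screen = screen  # display that renders frame plus guide frame (S302)

        def acquire(self):
            """Acquisition module 901: fetch a frame of the monitored scene."""
            return self.camera.read_frame()

        def display(self, frame, guide_frame):
            """Display module 902: draw the guide frame over the frame so the
            user can adjust angle/parameters until the debugging target
            coincides with the guide frame."""
            self.screen.render(frame, overlay=guide_frame)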
Fig. 10 is a schematic structural diagram of a debugging device 1000 provided in this application. The debugging device 1000 includes:
an acquisition module 1001, configured to acquire a shooting picture obtained by shooting a monitoring scene with a camera, where the shooting picture includes a debugging target, and the debugging target is a shooting target set in the monitoring scene for debugging the camera. The acquisition module 1001 may perform step S301 or step S701 in the above method embodiment;
a display module 1002, configured to display a guide frame on the shooting picture, where the guide frame is used to guide a user to adjust the shooting angle and/or shooting parameters of the camera so that the overlap ratio of a first area and a second area meets the user's requirement, the first area being the area where the debugging target is located in the shooting picture and the second area being the area where the guide frame is located in the shooting picture. The display module 1002 may perform step S302 or step S702 in the above method embodiments, and may correspond to the display screen in the above method embodiment.
Optionally, the acquisition module 1001 is further configured to acquire the target type of the debugging target, and the display module 1002 is specifically configured to display a guide frame whose type is determined based on the target type.
Optionally, the acquisition module 1001 is further configured to acquire the resolution of the camera, and the display module 1002 is specifically configured to display a guide frame whose size is determined based on the resolution.
Optionally, the transparency of the second area is different from that of a third area, where the third area is the area of the shooting picture other than the second area.
Optionally, the display module 1002 is specifically configured to display the outline of the second area on the shooting picture.
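The two display options just described (a second area whose transparency differs from the third area, or an outline-only guide frame) could plausibly be rendered with OpenCV as in the following sketch; the choice of OpenCV, the color, and the alpha values are assumptions of this illustration, not part of the embodiments:

    import cv2
    import numpy as np

    def draw_guide_frame(frame: np.ndarray, box, outline_only: bool = False) -> np.ndarray:
        """Render the guide frame (second area) on the shooting picture.
        box = (x1, y1, x2, y2) in integer pixels. When outline_only is False,
        the third area (everything outside the box) is dimmed so that the
        second area stands out with a different transparency."""
        x1, y1, x2, y2 = box
        if outline_only:
            out = frame.copy()
            cv2.rectangle(out, (x1, y1), (x2, y2), (0, 255, 0), 2)  # outline of the second area
            return out
        dimmed = cv2.addWeighted(frame, 0.4, np.zeros_like(frame), 0.6, 0)
        dimmed[y1:y2, x1:x2] = frame[y1:y2, x1:x2]  # keep the second area at full brightness
        cv2.rectangle(dimmed, (x1, y1), (x2, y2), (0, 255, 0), 2)
        return dimmed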
Optionally, the debugging device 1000 further includes a determining module 1003, configured to determine a focusing area and/or encoding parameters of the camera so that the sharpness of the second area is higher than that of a third area, where the third area is the area of the shooting picture other than the second area.
Optionally, the debugging device 1000 further includes a detection module 1004, configured to detect the overlap ratio of the first area and the second area; the detection module 1004 may perform the detecting part of step S703 in the above method embodiment. The display module 1002 is further configured to output the overlap ratio, thereby performing the outputting part of step S703, and may correspond to the display screen and the loudspeaker in the above method embodiment.
Optionally, the display module 1002 is further configured to output first prompt information when the overlap ratio is not higher than a first threshold, where the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera. The display module 1002 may perform step S704 in the above method embodiment, and may correspond to the display screen and the loudspeaker in the above method embodiment.
Optionally, the display module 1002 is further configured to output second prompt information when the overlap ratio is higher than the first threshold, where the second prompt information is used to indicate that debugging of the camera is completed. The display module 1002 may perform step S705 in the above method embodiment, and may correspond to the display screen and the loudspeaker in the above method embodiment.
Optionally, the display module 1002 is specifically configured to display the guide frame on the shooting picture in response to a first instruction, where the first instruction is used to instruct entering the debugging mode. The display module 1002 may perform step S702 in the above method embodiment.
Optionally, the display module 1002 is specifically configured to stop displaying the guide frame on the shooting picture in response to a second instruction, where the second instruction is used to instruct exiting the debugging mode. The display module 1002 may perform step S706 in the above method embodiment.
Optionally, the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
Optionally, the target type is a person or a vehicle.
It should be understood that the specific implementation details and the corresponding beneficial effects of the debugging device 900 or the debugging device 1000 can be understood with reference to the corresponding content of the foregoing method for debugging a camera; details are not repeated here.
Fig. 11 is a schematic structural diagram of a debugging device 1100 provided in this application. The debugging device 1100 includes a processor 1101, a communication interface 1102, a memory 1103 and a bus 1104. The processor 1101 may comprise a CPU, or a CPU combined with at least one of a GPU, an NPU and other types of processors. The processor 1101, the communication interface 1102, and the memory 1103 are connected to each other through the bus 1104. In this embodiment of the application, the processor 1101 is configured to control and manage the actions of the debugging device 1100; for example, the processor 1101 is configured to perform the above method of debugging a camera and/or other processes of the techniques described herein. The communication interface 1102 is configured to support the communication of the debugging device 1100, and the memory 1103 is configured to store the program code and data of the debugging device 1100.
The processor 1101 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a digital signal processor and a microprocessor. The bus 1104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is shown as only one thick line in fig. 11, but this does not mean that there is only one bus or only one type of bus.
The present application also provides a computer-readable storage medium storing computer-executable instructions. When at least one processor of a device executes the computer-executable instructions, the device performs the method of debugging a camera described in the above embodiments.
The present application also provides a computer program product including computer-executable instructions stored in a computer-readable storage medium. At least one processor of a device may read the computer-executable instructions from the computer-readable storage medium, and executing them causes the device to perform the method of debugging a camera described in the above embodiments.
The present application also provides a chip system, including at least one processor and an interface. The interface is used to receive data and/or signals, and the at least one processor is used to support implementing the method of debugging a camera described in the above embodiments. In one possible design, the chip system may further include a memory for holding the program instructions and data necessary for the computer device. The chip system may consist of chips, or may include chips together with other discrete devices.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (30)

1. A method of commissioning a camera, comprising:
acquiring a shooting picture obtained by shooting a monitoring scene by a camera, wherein the shooting picture comprises a debugging target, and the debugging target is a shooting target which is arranged in the monitoring scene and is used for debugging the camera;
and displaying a guide frame on the shooting picture, wherein the guide frame is used for guiding a user to adjust the shooting angle and/or shooting parameters of the camera so that the overlap ratio of a first area and a second area meets the requirement of the user, the first area is an area where the debugging target is located in the shooting picture, and the second area is an area where the guide frame is located in the shooting picture.
2. The method according to claim 1, wherein the displaying a guide frame on the shooting picture comprises:
obtaining a target type of the debugging target;
and displaying a guide frame on the shooting picture, wherein the type of the guide frame is determined based on the target type.
3. The method according to claim 1 or 2, wherein the displaying a guide frame on the shooting picture comprises:
acquiring the resolution of the camera;
and displaying a guide frame on the shooting picture, wherein the size of the guide frame is determined based on the resolution.
4. The method according to any one of claims 1 to 3, wherein the transparency of the second area is different from that of a third area, and the third area is an area of the shooting picture other than the second area.
5. The method according to any one of claims 1 to 4, wherein the displaying a guide frame on the shooting picture comprises:
and displaying the outline of the second area on the shooting picture.
6. The method according to any one of claims 1 to 5, further comprising:
determining a focusing area and/or encoding parameters of the camera so that the sharpness of the second area is higher than that of a third area, wherein the third area is an area of the shooting picture other than the second area.
7. The method according to any one of claims 1 to 6, wherein after the displaying a guide frame on the shooting picture, the method further comprises:
detecting the overlap ratio of the first area and the second area;
and outputting the overlap ratio.
8. The method according to claim 7, wherein the outputting the overlap ratio comprises:
outputting the overlap ratio by voice playback and/or text display.
9. The method according to claim 7 or 8, wherein after the detecting the overlap ratio of the first area and the second area, the method further comprises:
outputting first prompt information when the overlap ratio is not higher than a first threshold, wherein the first prompt information is used for prompting the user to adjust the shooting angle and/or shooting parameters of the camera.
10. The method according to any one of claims 7 to 9, wherein after the detecting the overlap ratio of the first area and the second area, the method further comprises:
outputting second prompt information when the overlap ratio is higher than the first threshold, wherein the second prompt information is used for indicating that the debugging of the camera is completed.
11. The method according to any one of claims 1 to 10, wherein the displaying a guide frame on the shooting picture comprises:
displaying a guide frame on the shooting picture in response to a first instruction, wherein the first instruction is used for instructing to enter a debugging mode.
12. The method according to claim 11, wherein after the displaying a guide frame on the shooting picture in response to the first instruction, the method further comprises:
stopping displaying the guide frame on the shooting picture in response to a second instruction, wherein the second instruction is used for instructing to exit the debugging mode.
13. The method according to any one of claims 1 to 12, wherein the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
14. The method according to any one of claims 2 to 13, wherein the target type is a person or a vehicle.
15. An apparatus for commissioning a camera, comprising:
an acquisition module, configured to acquire a shooting picture obtained by shooting a monitoring scene by a camera, wherein the shooting picture comprises a debugging target, and the debugging target is a shooting target which is arranged in the monitoring scene and is used for debugging the camera; and
a display module, configured to display a guide frame on the shooting picture, wherein the guide frame is used for guiding a user to adjust the shooting angle and/or shooting parameters of the camera so that the overlap ratio of a first area and a second area meets the requirement of the user, the first area is an area where the debugging target is located in the shooting picture, and the second area is an area where the guide frame is located in the shooting picture.
16. The apparatus according to claim 15, wherein the acquisition module is further configured to acquire the target type of the debugging target;
and when the display module displays a guide frame on the shooting picture, the type of the guide frame is determined based on the target type.
17. The apparatus according to claim 15 or 16, wherein the acquisition module is further configured to acquire the resolution of the camera;
and when the display module displays a guide frame on the shooting picture, the size of the guide frame is determined based on the resolution.
18. The apparatus according to any one of claims 15 to 17, wherein the transparency of the second area is different from that of a third area, and the third area is an area of the shooting picture other than the second area.
19. The apparatus according to any one of claims 15 to 18, wherein the display module is specifically configured to display the outline of the second area on the shooting picture.
20. The apparatus according to any one of claims 15 to 19, further comprising:
a determining module, configured to determine a focusing area and/or encoding parameters of the camera so that the sharpness of the second area is higher than that of a third area, wherein the third area is an area of the shooting picture other than the second area.
21. The apparatus according to any one of claims 15 to 20, further comprising:
a detection module, configured to detect the overlap ratio of the first area and the second area;
wherein the display module is further configured to output the overlap ratio.
22. The apparatus according to claim 21, wherein the display module is specifically configured to output the overlap ratio by voice playback and/or text display.
23. The apparatus according to claim 20 or 21, wherein the display module is further configured to output first prompt information when the overlap ratio is not higher than a first threshold, and the first prompt information is used for prompting the user to adjust the shooting angle and/or shooting parameters of the camera.
24. The apparatus according to any one of claims 20 to 22, wherein the display module is further configured to output second prompt information when the overlap ratio is higher than the first threshold, and the second prompt information is used for indicating that the debugging of the camera is completed.
25. The apparatus according to any one of claims 15 to 24, wherein the display module is further configured to display a guide frame on the shooting picture in response to a first instruction, and the first instruction is used for instructing to enter a debugging mode.
26. The apparatus according to claim 25, wherein the display module is further configured to stop displaying the guide frame on the shooting picture in response to a second instruction, and the second instruction is used for instructing to exit the debugging mode.
27. The apparatus according to any one of claims 15 to 26, wherein the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
28. The apparatus according to any one of claims 16 to 27, wherein the target type is a person or a vehicle.
29. An apparatus for commissioning a camera, comprising:
a processor and a memory for storing a program;
the processor is configured to implement the method according to any one of claims 1-14 by executing the program.
30. A computer-readable storage medium, on which a computer program is stored, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 14 is implemented.
CN202210312374.1A 2022-01-11 2022-03-28 Method for debugging camera and related equipment Pending CN116471477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/120798 WO2023134215A1 (en) 2022-01-11 2022-09-23 Method for debugging camera, and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210028167 2022-01-11
CN2022100281673 2022-01-11

Publications (1)

Publication Number Publication Date
CN116471477A true CN116471477A (en) 2023-07-21

Family

ID=87183013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312374.1A Pending CN116471477A (en) 2022-01-11 2022-03-28 Method for debugging camera and related equipment

Country Status (2)

Country Link
CN (1) CN116471477A (en)
WO (1) WO2023134215A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117835044A (en) * 2024-03-06 2024-04-05 凌云光技术股份有限公司 Debugging method and device of motion capture camera
CN117835044B (en) * 2024-03-06 2024-06-28 凌云光技术股份有限公司 Debugging method and device of motion capture camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201023633A (en) * 2008-12-05 2010-06-16 Altek Corp An image capturing device for automatically position indicating and the automatic position indicating method thereof
KR20100081049A (en) * 2009-01-05 2010-07-14 삼성전자주식회사 Apparatus and method for photographing image in portable terminal
CN106534669A (en) * 2016-10-25 2017-03-22 华为机器有限公司 Shooting composition method and mobile terminal
CN109344715A (en) * 2018-08-31 2019-02-15 北京达佳互联信息技术有限公司 Intelligent composition control method, device, electronic equipment and storage medium
CN111277759B (en) * 2020-02-27 2021-08-31 Oppo广东移动通信有限公司 Composition prompting method and device, storage medium and electronic equipment
CN111757098B (en) * 2020-06-30 2022-08-05 北京百度网讯科技有限公司 Debugging method and device after installation of intelligent face monitoring camera, camera and medium


Also Published As

Publication number Publication date
WO2023134215A1 (en) 2023-07-20


Legal Events

Date Code Title Description
PB01 Publication