WO2023134215A1 - Method for debugging camera, and related device - Google Patents

Method for debugging camera, and related device

Info

Publication number
WO2023134215A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
shooting
camera
debugging
guide frame
Prior art date
Application number
PCT/CN2022/120798
Other languages
French (fr)
Chinese (zh)
Inventor
吴志豪
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023134215A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The embodiments of the present application relate to the field of cameras, and in particular, to a method for debugging a camera and related equipment.
  • Before the camera starts working, the user (for example, a camera installer) can first install the camera at a suitable position so that the camera can capture a preset monitoring scene. Then, the user needs to debug the camera so that, during later operation, the camera can capture objects (for example, people or vehicles) in the monitoring scene with higher quality.
  • In conventional debugging, a debugging target serving as the shooting target is usually placed in the monitoring scene in advance, and the user then repeatedly adjusts the shooting angle and shooting parameters of the camera based on the camera's shooting pictures and manual experience, until the position, size and angle of the debugging target in the shooting picture of the camera meet the requirements.
  • Embodiments of the present application provide a method for debugging a camera and related equipment, which are used to improve the debugging efficiency of the camera and improve the debugging effect of the camera.
  • the embodiment of the present application also provides a corresponding computer-readable storage medium, a device for debugging a camera, and the like.
  • A first aspect of the present application provides a method for debugging a camera, including: obtaining a shooting picture obtained by the camera shooting a monitoring scene, where the shooting picture includes a debugging target, and the debugging target is a shooting target that is set in the monitoring scene and used for debugging the camera; and displaying a guide frame on the shooting picture, where the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the degree of overlap between a first area and a second area meets the user's requirements, the first area being the area where the debugging target is located in the shooting picture, and the second area being the area where the guide frame is located in the shooting picture.
  • While the camera is being debugged, it continuously obtains shooting pictures, so that the user can debug the camera based on the shooting pictures and the guide frame displayed in them.
  • The area where the guide frame is located can be understood as the ideal or expected area of the capture target in the shooting picture when the camera is working. Therefore, the higher the degree of overlap between the first area and the second area, the closer the shooting angle and shooting parameters of the camera are to the ideal angle and parameters.
  • During the debugging process, the user can intuitively see the comparison between the debugging target and the guide frame in the shooting picture, which guides the user to quickly adjust the shooting angle and/or shooting parameters of the camera so that the debugging target in the shooting picture overlaps the guide frame as much as possible, thereby improving the debugging efficiency of the camera.
  • In addition, the debugging standard is unified, which avoids the problem that the debugging effect of the camera cannot be guaranteed because different users have uneven manual experience.
  • displaying the guide frame on the shooting screen includes: acquiring a target type of the debugging target, and displaying the guide frame on the shooting screen, where the type of the guide frame is determined based on the target type.
  • The debugging target is usually set based on the actual capture requirement. For example, if a person in the monitoring scene is to be captured, the debugging target is a person (in this case, the target type is person); if a vehicle in the monitoring scene is to be captured, the debugging target is a vehicle (in this case, the target type is vehicle). Therefore, in this possible implementation, different types of guide frames can be determined based on different target types, and different types of guide frames have different shapes. This makes the outline of the guide frame match the outline of the actual capture target more closely, that is, the ideal or expected area becomes more accurate, so the debugging effect can be further improved.
  • the above step of: displaying the guide frame on the shooting screen includes: obtaining the resolution of the camera; displaying the guide frame on the shooting screen, and the size of the guide frame is determined based on the resolution.
  • In this possible implementation, guide frames of different sizes are determined based on different resolutions, so that the size of the guide frame matches the resolution of the camera and the guide frame can be displayed at an appropriate size in shooting pictures of different resolutions. This improves the applicability and the debugging effect of the solution.
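  • The following Python sketch illustrates how the two determinations above could be combined: the target type selects a guide-frame shape template and the camera resolution selects a guide-frame size. All names, template ratios and the resolution-to-size table are assumptions for illustration, not values from this application.

```python
# Hypothetical lookup tables; the shapes, sizes and resolutions are illustrative only.
GUIDE_FRAME_TEMPLATES = {
    "person":  {"aspect_ratio": 0.4},   # tall, narrow outline (whole body / upper body)
    "vehicle": {"aspect_ratio": 1.8},   # wide outline
}

SIZE_BY_RESOLUTION = {                   # picture resolution -> guide-frame height (px)
    (1920, 1080): 360,
    (2560, 1440): 480,
    (3840, 2160): 720,
}

def select_guide_frame(target_type: str, resolution: tuple) -> dict:
    """Pick the guide-frame shape from the target type and its size from the resolution."""
    template = GUIDE_FRAME_TEMPLATES[target_type]
    height = SIZE_BY_RESOLUTION[resolution]
    width = int(height * template["aspect_ratio"])
    return {"type": target_type, "width": width, "height": height}

print(select_guide_frame("person", (1920, 1080)))   # {'type': 'person', 'width': 144, 'height': 360}
```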
  • the above step of: displaying the guide frame on the shooting screen includes: displaying the guide frame in a preset area of the shooting screen.
  • the second area may be set according to the focus area of the camera.
  • For example, in the left-right (horizontal) dimension the guide frame is located in the middle of the shooting picture, and in the up-down (vertical) dimension it is located at the bottom of the shooting picture; that is, the focus area (second area) of the camera is in the middle of the bottom of the shooting picture, and its height is not greater than one-third of the height of the shooting picture. This area is an ideal focus area for the camera.
  • In this way, the camera can focus on the area where the guide frame is located. Focusing further makes the area where the guide frame is located (that is, the area where the capture target is located when the camera is working) clearer in the picture captured by the camera, so the image quality of the capture target is the best and the capture effect can be improved.
  • the transparency of the second area and the third area are different, and the third area is an area in the shooting frame other than the second area.
  • the second area corresponding to the guide frame and the remaining third area in the shooting screen are displayed separately, which enhances the guidance function for the user and improves the user experience.
  • the above step of: displaying the guide frame on the shooting screen includes: displaying an outline of the second region on the shooting screen.
  • In a possible implementation, the method further includes: determining the focus area and/or encoding parameters of the camera so that the definition of the second area is higher than the definition of the third area, where the third area is the area of the shooting picture other than the second area.
  • As described above, the second area where the guide frame is located is the area where the capture target will be located. Therefore, in this possible implementation, in the shooting pictures obtained by the camera during operation, the definition of the area where the capture target is located is higher than that of other areas, which can improve the capture effect.
  • In a possible implementation, after the guide frame is displayed on the shooting picture, the method further includes: detecting the degree of overlap between the first area and the second area, and outputting the degree of overlap.
  • Because the degree of overlap quantifies how well the first area and the second area coincide or match, it reflects the coincidence of the two areas more accurately. Therefore, in this possible implementation, the user can conveniently learn the precise degree of overlap of the two areas and debug according to how it changes, which further improves debugging efficiency and the user experience.
  • The degree of overlap can be played in the form of voice or displayed in the form of text, so that the user can intuitively perceive the current degree of overlap while debugging the camera, which is convenient for the user and improves the user experience.
  • In a possible implementation, after the degree of overlap between the first area and the second area is detected, the method further includes: when the degree of overlap is not higher than a first threshold, outputting first prompt information, where the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera.
  • improving the guiding function for the user to debug the camera can improve the user's experience.
  • the first prompt information can be played and output in the form of voice, or displayed and output in the form of text, and the user can intuitively feel the first prompt information during the process of debugging the camera, which is convenient for the user to debug and improves the user experience.
  • In a possible implementation, when the degree of overlap is less than a second threshold, the first prompt information is used to prompt the user to adjust the shooting angle of the camera; and when the degree of overlap is greater than the second threshold and less than the first threshold, the first prompt information is used to prompt the user to adjust the shooting parameters of the camera.
  • different prompt information is sent to the user according to different overlapping degrees, which further improves the guiding function for the user to debug the camera, thereby further improving the debugging efficiency and the user debugging experience.
  • In a possible implementation, after the degree of overlap between the first area and the second area is detected, the method further includes: when the degree of overlap is higher than the first threshold, outputting second prompt information, where the second prompt information is used to indicate that the debugging of the camera has been completed.
  • the guiding function for the user to debug the camera can also be improved, and the user experience can be improved.
  • The second prompt information can be played in the form of voice or displayed in the form of text, so that the user can intuitively perceive the second prompt information while debugging the camera, which is convenient for the user and improves the user experience.
  • the above step of: displaying the guide frame on the shooting screen includes: displaying the guide frame on the shooting screen in response to a first instruction, the first instruction being used to indicate entering the debugging mode.
  • a guide frame is displayed on the shooting screen, which improves the feasibility of the solution.
  • In a possible implementation, after the guide frame is displayed on the shooting picture in response to the first instruction, the method further includes: in response to a second instruction, stopping displaying the guide frame on the shooting picture, where the second instruction is used to indicate exiting the debugging mode.
  • the position or angle of the first area is related to a shooting angle
  • the size of the first area is related to a shooting parameter
  • the second aspect of the present application provides an apparatus for debugging a camera, which is used to execute the method in the above first aspect or any possible implementation manner of the first aspect.
  • The apparatus for debugging a camera includes modules or units for executing the method in the first aspect or any possible implementation of the first aspect, for example: an acquisition module, a display module, a determination module, and a detection module.
  • the third aspect of the present application provides a device for debugging a camera, which includes a processor and a memory, and the memory is used to store a program; the processor is used to implement the method in the first aspect or any possible implementation of the first aspect by executing the program .
  • A fourth aspect of the present application provides a computer-readable storage medium that stores one or more computer-executable instructions. When a processor executes the computer-executable instructions, the processor performs the method in the first aspect or any possible implementation of the first aspect.
  • A fifth aspect of the present application provides a computer program product that stores one or more computer-executable instructions. When a processor executes the computer-executable instructions, the processor performs the method in the first aspect or any possible implementation of the first aspect.
  • the sixth aspect of the present application provides a chip system, the chip system includes at least one processor and an interface, the interface is used to receive data and/or signals, and the at least one processor is used to support the computer device to implement the above first aspect or the first Functions involved in any possible implementation of the aspect.
  • the system-on-a-chip may further include a memory, and the memory is used for storing necessary program instructions and data of the computer device.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of a scene where a camera provided by an embodiment of the present application performs smart target capture
  • FIG. 2 is a schematic diagram of another scene where the camera provided by the embodiment of the present application performs smart target capture
  • FIG. 3 is a schematic diagram of an embodiment of a method for debugging a camera provided in an embodiment of the present application
  • FIG. 4 is a schematic diagram of an embodiment of a shooting screen provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of another embodiment of the shooting screen provided by the embodiment of the present application.
  • FIG. 6A is a schematic diagram of another embodiment of the shooting screen provided by the embodiment of the present application.
  • FIG. 6B is a schematic diagram of another embodiment of the shooting screen provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of another embodiment of a method for debugging a camera provided in an embodiment of the present application.
  • Fig. 8 is a schematic diagram of an embodiment of the content displayed on the display screen provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of an embodiment of a device for debugging a camera provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another embodiment of a device for debugging a camera provided in an embodiment of the present application.
  • Fig. 11 is a schematic diagram of another embodiment of the device for debugging a camera provided by the embodiment of the present application.
  • Embodiments of the present application provide a method for debugging a camera and related equipment, which are used to improve the debugging efficiency of the camera and improve the debugging effect of the camera.
  • the embodiment of the present application also provides a corresponding computer-readable storage medium, a device for debugging a camera, and the like. Each will be described in detail below.
  • When the camera is working, it continuously shoots the monitoring scene (that is, it continuously records), and the shooting result obtained by the camera is a video containing consecutive frames of shooting pictures.
  • The user (for example, the installer or operator of the camera) can set a capture range on the shooting picture in advance. The camera then performs capture-target detection on the shooting pictures; when the camera detects a capture target in a shooting picture and detects that the capture target is located within the capture range, it can automatically capture the target to obtain an image that includes the capture target, that is, a capture frame or capture image.
  • In order to obtain better capture results, it is usually necessary to debug the camera.
  • During debugging, the camera usually shoots continuously to obtain shooting pictures; a debugging target is set, and the shooting angle and shooting parameters of the camera are then adjusted so that the debugging target is located in the ideal area of the camera's shooting picture.
  • Before debugging the camera, the user (for example, a camera debugging worker) may first specify the capture target and the capture area in the monitoring scene.
  • For example, the user can determine that the capture target in the monitoring scene is a person. Moreover, after analysis, the user finds that people pass through area A with a high probability when entering the building, so the user can use area A as the capture area in this scene.
  • For another example, the user can determine that the capture target in the monitoring scene is a vehicle. After analysis, the user finds that when a vehicle enters the parking lot, it usually stops in front of the barrier, and its license plate is likely to be in area B. The user can then use area B as the capture area in this scene.
  • the user can set a debugging target consistent with the capture target in the capture area of the monitoring scene.
  • For example, the user may have a certain person act as the debugging target and have that person stand in area A.
  • For another example, the user may use a certain vehicle as the debugging target and have the license plate of that vehicle located in area B.
  • In this way, the debugging target is always located in the pre-determined capture area, which assists the user in completing the camera debugging.
  • The state of the debugging target in the shooting picture reflects the state of the capture target in the shooting picture when the camera works subsequently. Therefore, by adjusting the shooting angle and shooting parameters of the camera so that the angle, size and position of the debugging target are in an ideal state, the angle, size and position of the capture target will also be in an ideal state in most cases during subsequent work, so the quality of the captured target is guaranteed.
  • the present application provides a method for debugging a camera, and the method for debugging a camera may be executed by a device for debugging a camera (referred to as a debugging device).
  • a debugging device may be the camera to be debugged itself, or any other device with computing and display functions, for example, it may be a computer, mobile phone or tablet computer, etc., which is not limited in this embodiment of the present application.
  • This application uses the application scenario shown in FIG. 1 as an example to introduce the method for debugging the camera.
  • the method for debugging the camera includes but is not limited to the following steps:
  • The debugging device can continuously acquire, in real time, the shooting pictures obtained by the camera shooting the monitoring scene, so that the user can complete the debugging of the camera according to the shooting pictures. During the debugging process, a debugging target is set at a preset position of the monitoring scene (that is, a position in the pre-determined capture area), so the shooting picture includes the debugging target. For example, in the scene shown in FIG. 1, a person located in area A is included in the shooting picture.
  • For example, the debugging device can communicate with the camera and receive the shooting pictures sent by the camera.
  • The guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the degree of overlap between the first area and the second area meets the user's needs, where the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
  • the following describes the shape, size and position of the guide frame displayed on the shooting screen.
  • The shape of the guide frame displayed on the shooting picture may be determined according to the capture requirements. Specifically, multiple types of guide frames may be set according to the capture requirements, and each type of guide frame has a different shape. The type of the guide frame can then be determined according to the type of the debugging target, and a guide frame of that type can be displayed. For example, guide frames of the "person" type and the "vehicle" type can be set in advance, where the shape (outline) of the "person" type guide frame can be the whole body or the upper body of a person, and the shape (outline) of the "vehicle" type guide frame can be the outline of a vehicle. In this embodiment, the type of the debugging target is "person", so a guide frame of the "person" type can be displayed.
  • It should be understood that the debugging target can also be of other types besides the above-mentioned "person" and "vehicle"; correspondingly, more types of guide frames, that is, more shapes of guide frames, can be set, which is not limited in this application.
  • the shape of the guide frame displayed on the shooting screen may also be a preset fixed shape, which is not limited in this application.
  • the size of the guide frame displayed on the shooting screen may be determined according to the resolution of the camera. That is to say, the size of the guide frame can be adjusted according to the resolution of the camera, so that the size of the guide frame in the shooting pictures of different cameras can be all appropriate. It should be understood that when the resolution of the camera is higher, the size of the guide frame should be larger, and when the resolution of the camera is lower, the size of the guide frame should be smaller. In actual implementation, the corresponding relationship between different resolutions and different guide frame sizes (dimensions) can be preset, and guide frames of corresponding sizes can be displayed on the shooting screen according to the resolution of the camera to be debugged.
  • the size of the guide frame displayed on the shooting screen can also be set according to the pixel requirements of the capture target, that is, the size of the guide frame can be adjusted according to the pixel requirements of different capture targets in the actual scene, which can make Under different capture targets, the guide frame can meet its pixel requirements.
  • the size of the guide frame displayed on the shooting screen may also be a preset fixed size, which is not limited in this application.
  • the displayed position of the guide frame on the shooting screen may be a preset position according to actual needs, for example, the displayed position of the guide frame may be set according to the focus area of the camera.
  • For example, in the left-right (horizontal) dimension the guide frame is located in the middle of the shooting picture, and in the up-down (vertical) dimension it is located at the bottom of the shooting picture; that is, the focus area (second area) of the camera is in the middle of the bottom of the shooting picture, and its height is not greater than one-third of the height of the shooting picture. In other words, the vertical centerline of the shooting picture coincides with the vertical centerline of the second area, and the bottom edge of the second area is aligned with the bottom edge of the shooting picture. This area is the most ideal focus area of the camera, so the position of the guide frame can be set in this area; the camera can then focus on the area where the guide frame is located, which makes that area clearer in the pictures captured by the camera.
  • It should be noted that the display position of the guide frame on the shooting picture can be measured by the position of a certain point of the guide frame on the shooting picture, for example, a certain point on the outline of the guide frame, or the center point of the guide frame (the intersection of its horizontal midline and vertical midline), which is not limited in this application.
  • the location of the above second area is generally an ideal focus area of the camera in the entire shooting picture, and setting the guide frame in this area can make the quality of the captured object in the shooting picture higher, thereby improving the effect of capturing.
  • In different situations, the preferred focus area of the camera may also change.
  • the specific position of the second area in the shooting picture can be adjusted according to the actual situation, which is not limited in this embodiment of the present application.
  • the displayed position of the guide frame on the shooting screen may be other positions preset according to actual needs, regardless of the focus area of the camera, which is not limited in this application.
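  • The following is a minimal sketch of the placement rule described above (the guide frame centered horizontally, flush with the bottom edge, and no taller than one-third of the picture height); the function and variable names are assumptions for illustration.

```python
def place_guide_frame(pic_w: int, pic_h: int, frame_w: int, frame_h: int):
    """Return (x, y, w, h) of the guide frame in picture coordinates."""
    frame_h = min(frame_h, pic_h // 3)   # height capped at one third of the picture height
    x = (pic_w - frame_w) // 2           # horizontally centered
    y = pic_h - frame_h                  # flush with the bottom edge of the picture
    return x, y, frame_w, frame_h

# Example: a 1920x1080 shooting picture with a 144x360 "person" guide frame
print(place_guide_frame(1920, 1080, 144, 360))   # (888, 720, 144, 360)
```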
  • In a specific implementation, the debugging device can display an interface for the user to input the type of the debugging target, the resolution of the camera, or the position of the required guide frame, so that the shape, size or position of the guide frame can be determined based on the user's input, and a guide frame of the corresponding shape and size is then displayed at the corresponding position of the shooting picture.
  • the present application does not limit the display form of the guide frame on the shooting screen, as long as the user can intuitively recognize the guide frame in the shooting screen.
  • Several possible display forms of the guide frame on the shooting screen are introduced below.
  • the guide frame may be displayed in the form of a contour line, specifically, the contour line of the guide frame may be displayed on the shooting screen, for example, the second area 20 in FIG. 4 .
  • the contour lines may be dotted lines or solid lines of different thicknesses, types, and colors, etc., which are not limited in this embodiment of the present application.
  • the guiding frame may be displayed by setting different transparencies in the second area (ie, the area where the guiding frame is located) and the third area.
  • the third area is an area in the shooting frame other than the second area.
  • the transparency of the second area 20 and the third area 30 may be set to be different. It should be understood that the present application does not limit the specific transparency values of the second area and the third area, as long as the user can distinguish the second area and the third area through different transparency.
  • When the second area is not completely transparent, it can also be filled, for example, with a solid color, a gradient, or a pattern.
  • the third area may also be treated similarly, which is not limited in this embodiment of the present application. For example, in FIG. 5 , the debugging device fills the third area with oblique lines.
  • In a specific implementation, the guide frame is displayed on the shooting picture by superimposing a transparent picture with an alpha channel, so as to give the second area and the third area different transparency. The alpha channel can be used to adjust the translucency of an image, and the debugging device superimposes a transparent image with an alpha channel on the shooting picture as the guide frame.
  • the above method 1 and method 2 can also be used in combination, that is, while displaying the guide frame in the form of outlines, the second area and the third area can also be set with different transparency, so as to further enhance the guiding effect on the user .
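  • As an illustration of method 2, the following NumPy sketch dims the third area so that the second area stands out, which is one possible way to superimpose a semi-transparent overlay; the array layout, rectangle format and blending weight are assumptions for illustration.

```python
import numpy as np

def render_guide_overlay(frame: np.ndarray, rect: tuple, dim_alpha: float = 0.5) -> np.ndarray:
    """Darken the third area (everything outside `rect`) so the second area stands out.

    frame: H x W x 3 shooting picture.
    rect:  (x, y, w, h) of the second area, i.e. where the guide frame is located.
    """
    x, y, w, h = rect
    out = frame.astype(np.float32)
    mask = np.zeros(frame.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True        # True inside the second area
    out[~mask] *= (1.0 - dim_alpha)      # blend a dark layer over the third area only
    return out.astype(np.uint8)

picture = np.full((1080, 1920, 3), 200, dtype=np.uint8)   # stand-in shooting picture
overlay = render_guide_overlay(picture, (888, 720, 144, 360))
```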
  • It should be noted that the shape, size and position of the guide frame (that is, the second area) displayed on the shooting picture of the camera generally remain unchanged (that is, the content in the shooting picture can change, but the guide frame displayed on the shooting picture remains the same). Therefore, the degree of overlap between the above-mentioned first area (that is, the area where the debugging target is located in the shooting picture) and the second area is determined by the shape, size and position of the first area.
  • the shape and position of the first area are related to the shooting angle of the camera, and the size of the first area is related to the shooting parameters of the camera.
  • adjusting the shooting angle of the camera is equivalent to controlling the camera to rotate, and the range of the camera shooting the monitoring scene can be adjusted.
  • the shape and position of the area (ie, the first area) of the debugging target in the shooting picture can be adjusted.
  • Adjusting the shooting parameters of the camera is equivalent to controlling parameters such as the optical magnification of the camera lens, and can adjust the distance of the camera shooting the monitoring scene.
  • the size of the area (ie, the first area) of the debugging target in the shooting picture can be adjusted. Therefore, the user can continuously adjust the shooting angle and/or shooting parameters of the camera to increase the degree of overlap between the first area and the second area until the degree of overlap reaches the user's satisfaction.
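  • The following is a rough illustration of how the shooting parameters affect the size of the first area, under the simplifying assumption that the target's size in the picture scales linearly with the optical magnification; the numbers are invented.

```python
def required_zoom_factor(target_height_px: float, guide_frame_height_px: float,
                         current_zoom: float = 1.0) -> float:
    """Zoom needed so that the debugging target's height matches the guide frame's height."""
    return current_zoom * guide_frame_height_px / target_height_px

# The target currently appears 180 px tall but the guide frame is 360 px tall,
# so the optical magnification would need to roughly double.
print(required_zoom_factor(180, 360))   # 2.0
```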
  • the user can complete the debugging when he sees that the first area and the second area almost completely overlap.
  • the debugging device displays a guide frame on the shooting screen after acquiring the shooting screen
  • the user finds that the first area 10 where the debugging target is located and the second area 20 where the guide frame is located do not overlap.
  • the user adjusts the shooting angle and/or shooting parameters of the camera, as shown in FIG. 6B
  • the first area 10 and the second area 20 are almost completely overlapped.
  • At this point, the degree of overlap meets the requirements, so the adjustment of the camera's shooting angle and/or shooting parameters can be finished, completing the debugging of the camera.
  • the advantages of the above-mentioned solution of the present application are more obvious in the scene where there is a requirement for the pixels of the captured object in the shooting picture of the camera.
  • For example, a capture requirement may be that the number of pixels between the two ears of a face in the captured picture is 120 px.
  • In conventional debugging, the user cannot directly judge with the naked eye whether the pixel value of the capture target in the shooting picture meets the requirement, so the user has to manually and repeatedly adjust the optical magnification of the camera to meet the capture requirement.
  • With the solution of the present application, the user only needs to adjust the camera so that the debugging target in the shooting picture basically coincides with the guide frame, and the pixel value of the capture target in the camera's shooting picture will then meet the requirement.
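  • The following is a small illustration of why coincidence with the guide frame implies the pixel requirement is met: if the guide frame is sized so that the ear-to-ear span corresponds to the required pixel count, overlapping the frame satisfies the requirement. The 0.6 ratio between the ear-to-ear span and the frame width is an assumption for illustration only; the 120 px value comes from the example above.

```python
def guide_frame_width_for_pixel_requirement(required_px: int = 120,
                                            ear_span_to_frame_width: float = 0.6) -> int:
    """Minimum guide-frame width (px) such that a target filling it meets the requirement."""
    return round(required_px / ear_span_to_frame_width)

print(guide_frame_width_for_pixel_requirement())   # 200: a 200 px wide "person" frame suffices
```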
  • the focusing algorithm usually takes the central area of the shooting picture as a high-weight area for focusing, and the result of this is to make the picture quality of the central area of the shooting picture better.
  • the second area where the guide frame is located may not be in the central area of the screen.
  • As a result, the camera may focus on the background or other areas that the user does not care about. For example, in a low-light environment, in order to obtain better shooting results, the camera usually shoots with a large aperture and a shallow depth of field; this shooting method may cause the camera to focus on the background of the shooting picture or other areas that the user does not care about.
  • Therefore, optionally, the debugging device can also adjust the focus area of the camera so that the camera uses the second area as the focus area, making the definition of the second area higher than that of the third area.
  • As described above, the second area where the guide frame is located is the area where the capture target will be located. Therefore, in this way, the definition of the area where the capture target is located in the shooting pictures obtained by the camera during operation is higher, so the capture effect can be improved and the accuracy of subsequent intelligent analysis of the capture target can be improved.
  • Optionally, the user can manually confirm the focus area; the user can trigger the camera to focus, or the debugging device can automatically trigger the camera to focus, which is not limited in this embodiment of the present application.
  • the default focus area of the camera is the second area.
  • the user can confirm that the focus area of the camera is the current focus area through the debugging device.
  • After focusing, the camera will use the focus area confirmed by the user as the focus area for shooting or capturing during its subsequent operation. If the definition of the debugging target after focusing does not meet the user's needs, the user can also manually adjust the current focus area through the debugging device and then confirm the focus area of the camera.
  • Optionally, when the debugging device is a device other than the camera, the debugging device can also send the determined focus area to the camera, so that the camera shoots according to this focus area during operation and thus obtains capture pictures in which the capture target is clear.
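  • The following is a minimal sketch of handing the second area to the camera as its focus area; CameraClient and its methods are hypothetical stand-ins for whatever control channel the camera actually exposes, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

class CameraClient:
    """Hypothetical control channel to the camera (placeholder implementation)."""
    def set_focus_region(self, region: Rect) -> None:
        print(f"focus region set to {region}")

    def trigger_autofocus(self) -> None:
        print("autofocus triggered")

def apply_focus_area(camera: CameraClient, second_area: Rect) -> None:
    """Use the area where the guide frame is located as the camera's focus area."""
    camera.set_focus_region(second_area)
    camera.trigger_autofocus()

apply_focus_area(CameraClient(), Rect(888, 720, 144, 360))
```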
  • the user can determine the capture range of the camera in the shooting screen through the debugging device.
  • Specifically, the default capture range of the camera is the entire shooting picture, but the camera may then also capture when the capture target is not in the focus area, and the quality of such capture results is relatively poor. Therefore, the user can draw or adjust the capture range of the camera in the shooting picture through the debugging device; for example, the user may set the capture range to the bottom third of the shooting picture, or set the capture range to the second area.
  • When the capture target is within the capture range in the shooting picture, the camera starts capturing, so the quality of the capture results can be significantly improved.
  • When the camera is working, it may need to compress the captured pictures and send them to other devices.
  • For example, the camera may encode the captured pictures in JPEG (Joint Photographic Experts Group) mode, or use a standard codec such as H.264, and send them to a server for storage. Therefore, also optionally, the debugging device can determine the encoding parameters of the camera, so that when the camera encodes the shooting pictures during subsequent work, it treats the area where the guide frame is located (that is, the second area) as a region of interest (ROI), making the definition of the second area higher than the definition of other areas.
  • Specifically, the above encoding parameters may be indication information of the ROI area to be encoded; and, when the debugging device is a device other than the camera, the debugging device may also send the encoding parameters to the camera, so that the camera encodes according to these parameters during operation. It should be understood that, in this way, the definition of the area where the capture target is located is also higher, so the capture effect can be improved, and the accuracy of subsequent intelligent analysis of the capture target can be improved.
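  • The following is a minimal sketch of what such encoding parameters could look like as a payload sent to the camera; the field names and the QP-delta convention are assumptions for illustration, not a real encoder interface.

```python
def build_roi_encoding_params(second_area, base_qp: int = 30, roi_qp_delta: int = -6) -> dict:
    """Request finer quantization inside the ROI so the second area is encoded more clearly."""
    x, y, w, h = second_area
    return {
        "codec": "H.264",
        "base_qp": base_qp,                        # quality level outside the ROI
        "roi": {"x": x, "y": y, "w": w, "h": h},   # region of interest = second area
        "roi_qp_delta": roi_qp_delta,              # negative delta => higher quality inside the ROI
    }

print(build_roi_encoding_params((888, 720, 144, 360)))
```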
  • the method for debugging the camera includes but is not limited to the following steps:
  • the guide frame may be displayed on the shooting screen after the user triggers the debugging mode.
  • The debugging device includes a display screen, and a debugging interface and a shooting picture frame can be displayed on the display screen, where the shooting picture frame is set on the left side of the display screen and the debugging interface is set on the right side of the display screen.
  • the shooting screen is displayed in the shooting screen frame.
  • The debugging device may respond to the user's first instruction indicating to enter the debugging mode and start displaying the guide frame on the shooting picture. For example, the display screen displays the debugging interface and the shooting picture; when the user clicks the debugging button (a virtual button) in the debugging interface, the shooting picture starts to be displayed in the shooting picture frame, and the guide frame is displayed at the same time. The user can also click a return button on the display screen to exit and re-display the debugging interface, which is not limited in this embodiment of the present application.
  • the user can adjust the shooting angle and/or shooting parameters of the camera.
  • Specifically, the user can adjust the shooting angle and shooting parameters of the camera, for example, by moving the camera to adjust the shooting angle or by operating the adjustment buttons of the camera to adjust the shooting parameters. To make debugging more convenient for the user, the user can also input into the debugging device the shooting angle and shooting parameters that need to be adjusted; the debugging device transmits these data to the camera, and the camera automatically adjusts its shooting angle and shooting parameters according to the data.
  • the debugging device can monitor the coincidence degree of the first area and the second area in real time, and output the coincidence degree.
  • Specifically, the debugging device can identify the first area. Then, for example, according to the area proportion of the first area within the second area and the area proportion of the second area within the first area, the degree of overlap between the first area and the second area can be determined, and the degree of overlap can be output in real time on this basis. Because the degree of overlap quantifies how well the first area and the second area coincide or match, it reflects the coincidence of the two areas more accurately. In this implementation, the user can therefore conveniently learn the precise degree of overlap of the two areas and debug according to how it changes, further improving debugging efficiency and the user experience.
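  • The following is a minimal sketch of one way to turn the two area proportions mentioned above into a single degree of overlap; treating both areas as axis-aligned rectangles and taking the smaller proportion are assumptions for illustration, since the application does not fix a particular formula.

```python
def overlap_degree(first_area, second_area) -> float:
    """first_area, second_area: (x, y, w, h) rectangles in picture coordinates."""
    x1, y1, w1, h1 = first_area
    x2, y2, w2, h2 = second_area
    ix = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))   # intersection width
    iy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))   # intersection height
    inter = ix * iy
    frac_of_first = inter / (w1 * h1)    # proportion of the first area inside the second
    frac_of_second = inter / (w2 * h2)   # proportion of the second area inside the first
    return min(frac_of_first, frac_of_second)

# Debugging target almost fills the guide frame:
print(f"{overlap_degree((880, 700, 150, 380), (888, 720, 144, 360)):.0%}")   # about 90%
```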
  • The degree of overlap between the first area and the second area may also be calculated in other ways, which is not limited in the present application.
  • For example, the debugging device is also equipped with a speaker, and the degree of overlap can be played in the form of voice, displayed in the form of text, or output in both forms at the same time, so that the user can intuitively perceive the degree of overlap while debugging the camera, which is convenient for the user and improves the user experience.
  • When the degree of overlap is output in the form of text, it can be displayed on the shooting picture or on an interface outside the shooting picture, for example, above the second area on the shooting picture, or in the debugging interface. The degree of overlap can be displayed as text in various fonts, colors or sizes, or the percentage it represents can be shown with a bar chart, which is not limited in this embodiment of the present application.
  • When the degree of overlap is not higher than the first threshold, it means that the gap between the debugging target and the guide frame is still relatively obvious, and the user needs to adjust the shooting angle and/or shooting parameters of the camera.
  • Therefore, the first prompt information is output to prompt the user to adjust the shooting angle and/or shooting parameters of the camera, which improves the guidance for the user when debugging the camera and improves the user experience.
  • the first prompt information can be played and output in the form of voice, or displayed and output in the form of text, and the user can intuitively feel the first prompt information during the process of debugging the camera, which is convenient for the user to debug and improves the user experience.
  • the debugging device can also provide more detailed and targeted prompts to the user.
  • a second threshold can be set (the second threshold is less than the first threshold), and when the degree of overlap is less than the second threshold, the first prompt information can be information for prompting the user to adjust the shooting angle of the camera; and In a case where the coincidence degree is greater than the second threshold and less than the first threshold, the first prompt may be information for prompting the user to adjust shooting parameters of the camera.
  • the above-mentioned first threshold and second threshold may be preset values, which are not limited in this application.
  • For example, the debugging device may output first prompt information prompting the user to adjust the shooting angle of the camera: it displays "Please adjust the shooting angle of the camera so that the position of the debugging target is close to the guide frame" in the debugging interface, and at the same time plays the voice "Please adjust the shooting angle of the camera so that the position of the debugging target is close to the guide frame".
  • the debugging device may output first prompt information prompting the user to adjust the shooting parameters of the camera. For example, "Please adjust the shooting parameters of the camera so that the debugging target coincides with the guiding frame" is displayed in the debugging interface, and at the same time the voice of "Please adjust the shooting parameters of the camera so that the debugging target coincides with the guiding frame" is played.
  • step S706 is executed.
  • the second prompt information can be played and output in the form of voice, or displayed and output in text form, and the user can intuitively feel the second prompt information in the process of debugging the camera, which is convenient for the user to debug and improves the user experience.
  • When the degree of overlap is higher than the first threshold, the debugging device can output second prompt information, for example, displaying "debugging completed" in the debugging interface and at the same time playing the voice "debugging completed". The second prompt information indicates that the debugging of the camera has been completed, which improves the guidance for the user when debugging the camera and improves the user experience.
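  • The following is a minimal sketch of the two-threshold prompt logic described in this embodiment; the threshold values and message strings are illustrative assumptions.

```python
FIRST_THRESHOLD = 0.90    # above this: debugging is considered complete (second prompt)
SECOND_THRESHOLD = 0.50   # below this: the position is still far off, so adjust the angle first

def prompt_for(overlap: float) -> str:
    if overlap > FIRST_THRESHOLD:
        return "Debugging completed."                          # second prompt information
    if overlap < SECOND_THRESHOLD:                             # first prompt information (angle)
        return ("Please adjust the shooting angle of the camera so that "
                "the position of the debugging target is close to the guide frame.")
    return ("Please adjust the shooting parameters of the camera so that "
            "the debugging target coincides with the guide frame.")   # first prompt (parameters)

for value in (0.30, 0.70, 0.95):
    print(f"{value:.0%}: {prompt_for(value)}")
```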
  • When the debugging device outputs the second prompt information, the user can observe the relationship between the debugging target and the guide frame in the shooting picture and then confirm in the debugging device that the debugging should stop. For example, after the debugging device outputs the second prompt information, the user clicks the stop-debugging button in the debugging interface; in response to this second instruction, the debugging device stops displaying the guide frame on the shooting picture, and the debugging of the camera is completed.
  • Alternatively, after the debugging device outputs the second prompt information, the user can also continue debugging according to the observed situation until the user's requirement is met; details are not repeated here.
  • It should be noted that the user can click the stop-debugging button under any circumstances to make the debugging device exit the debugging mode.
  • The debugging device will control the camera to save the current shooting parameters and shooting angle, so that when the user returns to the debugging mode later, the previous debugging work can be continued.
  • In this embodiment, the debugging device detects the degree of overlap between the first area corresponding to the debugging target and the second area corresponding to the guide frame, can output the degree of overlap in various ways, and outputs different prompt information according to the degree of overlap. Therefore, compared with the embodiment shown in FIG. 3, this embodiment can further enhance the guidance for user debugging, thereby further improving the efficiency of debugging the camera and improving the user experience.
  • the debugging device may also determine the focus area and/or encoding parameters of the camera, so that in the picture captured by the camera and the encoded picture, The area where the guide frame is located is clearer, which can improve the quality of the capture, so I won’t repeat it here.
  • the implementation manner shown in FIG. 7 can also be applied to a scene where there are multiple capture objects, so details will not be repeated here.
  • the embodiment of the present application also provides a device for debugging a camera (referred to as a debugging device) capable of implementing the above method for debugging a camera, which will be introduced below with reference to FIGS. 9-11 .
  • FIG. 9 is a schematic structural diagram of a debugging device 900 provided by the present application.
  • the debugging device 900 includes:
  • the obtaining module 901 is used to obtain the shooting picture obtained by shooting the monitoring scene by the camera.
  • The shooting picture includes a debugging target, and the debugging target is a shooting target set in the monitoring scene and used for debugging the camera. The obtaining module 901 can execute step S301 or step S701 in the above method for debugging a camera.
  • the display module 902 is configured to display a guide frame on the shooting screen, and the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the overlapping degree of the first area and the second area meets the needs of the user.
  • The first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
  • the display module 902 may execute step S302 in the foregoing method embodiments.
  • the display module 902 may correspond to the display screen in the foregoing method embodiments.
  • the display module 902 displays a guide frame on the shooting screen of the camera, wherein the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the debugging target is located in the guide frame as much as possible, so that Under the condition of ensuring debugging efficiency and shooting effect, camera debugging is realized at low cost.
  • FIG. 10 is a schematic structural diagram of a debugging device 1000 provided by the present application.
  • the debugging device 1000 includes:
  • the obtaining module 1001 is used to obtain the shooting picture obtained by shooting the monitoring scene by the camera.
  • The shooting picture includes a debugging target, and the debugging target is a shooting target set in the monitoring scene and used for debugging the camera. The obtaining module 1001 can execute step S301 or step S701 in the above method embodiments.
  • the display module 1002 is configured to display a guide frame on the shooting screen, and the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the overlapping degree of the first area and the second area meets the needs of the user.
  • The first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
  • the display module 1002 may execute step S302 or step S702 in the above method embodiments.
  • the display module 1002 may correspond to the display screen in the foregoing method embodiments.
  • the acquisition module 1001 is further configured to acquire the target type of the debugging target; the display module 1002 is specifically configured to display a guide frame on the shooting screen, and the type of the guide frame is determined based on the target type.
  • the acquiring module 1001 is further configured to acquire the resolution of the camera; the display module 1002 is further configured to display a guide frame on the shooting screen, and the size of the guide frame is determined based on the resolution.
  • the transparency of the second area and the third area are different, and the third area is an area in the shooting frame other than the second area.
  • the display module 1002 is specifically configured to display the outline of the second area on the shooting screen.
  • In a possible implementation, the debugging device 1000 further includes: a determination module 1003, configured to determine the focus area and/or encoding parameters of the camera so that the definition of the second area is higher than that of the third area, where the third area is the area of the shooting picture other than the second area.
  • In a possible implementation, the debugging device 1000 further includes: a detection module 1004, configured to detect the degree of overlap between the first area and the second area; the detection module 1004 can execute step S703 in the above method embodiments.
  • the display module 1002 is also used to output the coincidence degree.
  • the display module 1002 can execute step S703 in the above method embodiment.
  • the display module 1002 may correspond to the display screen and sound in the above method embodiments.
  • the display module 1002 is further configured to output first prompt information when the coincidence degree is not higher than the first threshold, and the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera.
  • the display module 1002 may execute step S704 in the above method embodiment.
  • the display module 1002 may correspond to the display screen and sound in the above method embodiments.
  • the display module 1002 is further configured to output second prompt information when the coincidence degree is higher than the first threshold, and the second prompt information is used to indicate that the debugging of the camera has been completed.
  • the display module 1002 may execute step S705 in the above method embodiment.
  • the display module 1002 may correspond to the display screen and sound in the above method embodiments.
  • the display module 1002 is specifically further configured to display a guide frame on the shooting screen in response to a first instruction, where the first instruction is used to indicate entering the debugging mode.
  • the display module 1002 can execute step S702 in the above method embodiment.
  • the display module 1002 is specifically further configured to stop displaying the guide frame on the shooting screen in response to a second instruction, the second instruction being used to instruct to exit the debugging mode.
  • the display module 1002 may execute step S706 in the above method embodiment.
  • the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
  • the target type is human or vehicle.
  • the debugging device 1100 includes: a processor 1101, a communication interface 1102, a memory 1103, and a bus 1104.
  • the processor 1101 may include a CPU, or at least one of the CPU, GPU, NPU, and other types of processors.
  • the processor 1101 , the communication interface 1102 and the memory 1103 are connected to each other through a bus 1104 .
  • the processor 1101 is used to control and manage the actions of the computer device 1100, for example, the processor 1101 is used to execute the above-mentioned method for debugging a camera and/or other processes for the technologies described herein.
  • the communication interface 1102 is used to support the computer device 1100 to communicate.
  • The memory 1103 is used for storing program codes and data of the computer device 1100.
  • the processor 1101 may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic devices, transistor logic devices, hardware components or any combination thereof. It can implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and the like.
  • the bus 1104 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus, etc.
  • the present application also provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when at least one processor of the device executes the computer-executable instructions, the device executes the method for debugging the camera described in the above-mentioned embodiments .
  • The present application also provides a computer program product, which includes computer-executable instructions stored in a computer-readable storage medium; at least one processor of a device can read the computer-executable instructions from the computer-readable storage medium, and the at least one processor executes the computer-executable instructions so that the device performs the method for debugging a camera described in the foregoing embodiments.
  • the present application also provides a chip system, the chip system includes at least one processor and an interface, the interface is used to receive data and/or signals, and the at least one processor is used to support the implementation of the method for debugging the camera described in the above embodiments.
  • the system-on-a-chip may further include a memory, and the memory is used for storing necessary program instructions and data of the computer device.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • the disclosed system, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.

Abstract

A method for debugging a camera. The method comprises: acquiring a photographic picture which is obtained by a camera photographing a monitoring scene, wherein the photographic picture comprises a debugging target, which is a photographed object arranged in the monitoring scene and is used for debugging the camera; and displaying a guide box on the photographic picture, wherein the guide box is used for guiding a user to adjust the photographic angle and/or photographic parameter of the camera, such that the degree of coincidence between a first area and a second area meets requirements of the user, the first area being an area, where the debugging target is located, in the photographic picture, and the second area being an area, where the guide box is located, in the photographic picture. The method for debugging a camera is used for improving the efficiency of debugging a camera and improving the effect of debugging a camera.

Description

Method for debugging a camera and related device
This application claims priority to the Chinese patent application No. 202210028167.3, filed with the State Intellectual Property Office of China on January 11, 2022 and entitled "A Method, Device and System for Quickly Tuning Capture Cameras", and to the Chinese patent application No. 202210312374.1, filed with the State Intellectual Property Office of China on March 28, 2022 and entitled "A Method for Debugging Cameras and Related Equipment", both of which are incorporated herein by reference in their entirety.
Technical Field
The embodiments of the present application relate to the field of cameras, and in particular, to a method for debugging a camera and related equipment.
Background
With the rapid development of camera technology and artificial intelligence (AI), current cameras usually also support AI capabilities; for example, a camera can identify targets and capture the recognized targets.
Before the camera works, the user (for example, a camera installer) can first install the camera at a suitable position, so that the camera can capture a preset monitoring scene. Then, the user needs to debug the camera so that the camera can capture objects (for example, people or vehicles) in the monitoring scene with higher quality when the camera is working later. During debugging, a debugging target serving as the shooting target is usually set in advance in the monitoring scene, and the user then repeatedly adjusts the shooting angle and shooting parameters of the camera based on the shooting pictures of the camera and manual experience, so that the position, size and angle of the debugging target in the shooting picture of the camera meet the requirements.
It can be seen that this debugging process is extremely inefficient; moreover, since the process is adjusted based on manual experience, the final debugging effect cannot be guaranteed.
Summary
Embodiments of the present application provide a method for debugging a camera and related equipment, which are used to improve the debugging efficiency of the camera and improve the debugging effect of the camera. The embodiments of the present application also provide a corresponding computer-readable storage medium, a device for debugging a camera, and the like.
A first aspect of the present application provides a method for debugging a camera, including: obtaining a shooting picture obtained by a camera shooting a monitoring scene, where the shooting picture includes a debugging target, and the debugging target is a shooting target set in the monitoring scene and used for debugging the camera; and displaying a guide frame on the shooting picture, where the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the degree of coincidence between a first area and a second area meets the user's requirements, the first area being the area where the debugging target is located in the shooting picture, and the second area being the area where the guide frame is located in the shooting picture.
During debugging, the camera continuously obtains shooting pictures, so that the user can debug the camera based on the shooting pictures and the guide frame displayed on them; when the camera starts working after debugging is completed, it captures the capture target to obtain a capture picture. The area where the guide frame is located can be understood as the ideal or expected area of the capture target in the shooting picture when the camera is working; therefore, the higher the degree of coincidence between the first area and the second area, the closer the shooting angle and shooting parameters of the camera are to the ideal angle and parameters. Thus, in the first aspect, by displaying the guide frame on the shooting picture of the camera, the user can intuitively see, during debugging, how the debugging target compares with the guide frame in the shooting picture, and is guided to quickly adjust the shooting angle and/or shooting parameters of the camera so that the debugging target and the guide frame in the resulting shooting picture overlap as much as possible, thereby improving the debugging efficiency of the camera. Moreover, in this solution, since every user debugs the camera according to the guide frame, the standard is unified, which avoids the problem that the debugging effect of the camera cannot be guaranteed due to the uneven manual experience of different users.
In a possible implementation of the first aspect, displaying the guide frame on the shooting picture includes: obtaining a target type of the debugging target, and displaying the guide frame on the shooting picture, where the type of the guide frame is determined based on the target type.
Since the debugging target is usually set based on the actual capture requirements (for example, if a person in the monitoring scene is to be captured, the debugging target is a person and the target type is person; if a vehicle in the monitoring scene is to be captured, the debugging target is a vehicle and the target type is vehicle), in this possible implementation different types of guide frames can be determined based on different target types. Different types of guide frames have different shapes, so the outline of the guide frame matches the outline of the actual capture target more closely, that is, the ideal or expected area is more accurate, which further improves the debugging effect.
In a possible implementation of the first aspect, displaying the guide frame on the shooting picture includes: obtaining the resolution of the camera, and displaying the guide frame on the shooting picture, where the size of the guide frame is determined based on the resolution.
In this possible implementation, guide frames of different sizes are determined based on different resolutions, so that the size of the guide frame matches the resolution of the camera. In this way, the guide frame can be displayed at a suitable size on shooting pictures of different resolutions, which improves the applicability of the solution and the debugging effect.
In a possible implementation of the first aspect, displaying the guide frame on the shooting picture includes: displaying the guide frame in a preset area of the shooting picture. For example, the second area may be set according to the focus area of the camera. Specifically, in the left-right (horizontal) dimension the guide frame is located in the middle of the shooting picture, and in the up-down (vertical) dimension the guide frame is located at the bottom of the shooting picture; for example, the focus area of the camera (the second area) is the bottom middle of the shooting picture, with a height no greater than one third of the height of the shooting picture. This area is an ideal focus area of the camera, so if the guide frame is placed in this area, the camera can focus on the area where the guide frame is located. In this way, in the picture captured by the camera, the area where the guide frame is located (that is, the area where the capture target is located when the camera is working) is clearer and the image quality of the capture target is best, so the capture effect can be improved.
In a possible implementation of the first aspect, the transparency of the second area and the transparency of the third area are different, where the third area is the area in the shooting picture other than the second area.
In this possible implementation, the second area corresponding to the guide frame and the remaining third area of the shooting picture are displayed differently, which enhances the guidance for the user and improves the user experience.
In a possible implementation of the first aspect, displaying the guide frame on the shooting picture includes: displaying the outline of the second area on the shooting picture.
In this possible implementation, when the second area is displayed on the shooting picture, the guide frame is displayed by displaying the outline of the second area, which enhances the guidance for the user and improves the user experience.
In a possible implementation of the first aspect, the method further includes: determining the focus area and/or encoding parameters of the camera, so that the definition of the second area is higher than the definition of the third area, where the third area is the area in the shooting picture other than the second area.
It should be understood that after debugging is completed, the second area where the guide frame is located is the area where the capture target is located. Therefore, in this possible implementation, in the shooting pictures obtained while the camera is working, the definition of the area where the capture target is located is higher than that of the remaining areas, which improves the capture effect.
In a possible implementation of the first aspect, after displaying the guide frame on the shooting picture, the method further includes: detecting the degree of coincidence between the first area and the second area, and outputting the degree of coincidence.
Since the degree of coincidence quantifies the overlap or matching between the first area and the second area, it can reflect the overlap of the two areas relatively accurately. Therefore, in this possible implementation, the user can conveniently know the precise overlap of the two areas and can debug according to the change of the degree of coincidence, which further improves the debugging efficiency and the user experience.
Optionally, the degree of coincidence may be played as voice output or displayed as text, so that the user can intuitively perceive the current degree of coincidence while debugging the camera, which facilitates debugging and improves the user experience.
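For illustration only (this is not part of the claimed method), the degree of coincidence can be quantified as the intersection-over-union of the two areas; the sketch below assumes that both the first area and the second area are described by axis-aligned rectangles (x, y, width, height) in pixel coordinates, and the function name is hypothetical.

```python
def coincidence_degree(first_area, second_area):
    """Return the overlap of two axis-aligned rectangles as a value in [0, 1].

    Each area is (x, y, w, h): top-left corner plus width and height in pixels.
    1.0 means the debugging-target area and the guide-frame area coincide
    exactly; 0.0 means they do not overlap at all.
    """
    x1, y1, w1, h1 = first_area
    x2, y2, w2, h2 = second_area

    # Width and height of the intersection rectangle (0 if the areas are disjoint).
    inter_w = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    inter_h = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    intersection = inter_w * inter_h

    union = w1 * h1 + w2 * h2 - intersection
    return intersection / union if union > 0 else 0.0


# Example: a target area that only partially overlaps the guide frame.
print(coincidence_degree((100, 200, 80, 160), (120, 220, 80, 160)))  # ~0.49
```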
In a possible implementation of the first aspect, after detecting the degree of coincidence between the first area and the second area, the method further includes: when the degree of coincidence is not higher than a first threshold, outputting first prompt information, where the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera.
This possible implementation improves the guidance for the user when debugging the camera and can improve the user experience.
Optionally, the first prompt information may be played as voice output or displayed as text, so that the user can intuitively perceive the first prompt information while debugging the camera, which facilitates debugging and improves the user experience.
In a possible implementation of the first aspect, when the degree of coincidence is less than a second threshold, the first prompt information is used to prompt the user to adjust the shooting angle of the camera; when the degree of coincidence is greater than or equal to the second threshold and less than the first threshold, the first prompt information is used to prompt the user to adjust the shooting parameters of the camera.
In this possible implementation, different prompt information is sent to the user according to different degrees of coincidence, which further improves the guidance for the user when debugging the camera, and therefore further improves the debugging efficiency and the user's debugging experience.
In a possible implementation of the first aspect, after detecting the degree of coincidence between the first area and the second area, the method further includes: when the degree of coincidence is higher than the first threshold, outputting second prompt information, where the second prompt information is used to indicate that the debugging of the camera has been completed.
This possible implementation also improves the guidance for the user when debugging the camera and can improve the user experience.
Optionally, the second prompt information may be played as voice output or displayed as text, so that the user can intuitively perceive the second prompt information while debugging the camera, which facilitates debugging and improves the user experience.
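Purely to illustrate the threshold logic in the implementations above, the following sketch maps a degree of coincidence to the first and second prompt information; the threshold values and names are illustrative assumptions, not values prescribed by this application.

```python
FIRST_THRESHOLD = 0.8   # assumed value: above this, debugging is considered complete
SECOND_THRESHOLD = 0.3  # assumed value: below this, the shooting angle is likely wrong


def prompt_for(coincidence):
    """Choose the prompt text for the current degree of coincidence."""
    if coincidence > FIRST_THRESHOLD:
        # Second prompt information: the debugging of the camera has been completed.
        return "Debugging completed."
    if coincidence < SECOND_THRESHOLD:
        # First prompt information, angle branch: the areas barely overlap,
        # so the shooting angle most likely needs to be adjusted.
        return "Please adjust the shooting angle of the camera."
    # First prompt information, parameter branch: the areas overlap but their sizes
    # do not match, so the shooting parameters (e.g. optical magnification) need adjusting.
    return "Please adjust the shooting parameters of the camera."


for value in (0.1, 0.5, 0.9):
    print(value, "->", prompt_for(value))
```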
In a possible implementation of the first aspect, displaying the guide frame on the shooting picture includes: displaying the guide frame on the shooting picture in response to a first instruction, where the first instruction is used to indicate entering the debugging mode.
In this possible implementation, the guide frame is displayed on the shooting picture after the user issues the first instruction for indicating entry into the debugging mode, which improves the feasibility of the solution.
In a possible implementation of the first aspect, after displaying the guide frame on the shooting picture in response to the first instruction, the method further includes: in response to a second instruction, stopping displaying the guide frame on the shooting picture, where the second instruction is used to indicate exiting the debugging mode.
In this possible implementation, the display of the guide frame on the shooting picture is stopped after the user issues the second instruction for indicating exit from the debugging mode, which improves the feasibility of the solution.
In a possible implementation of the first aspect, the position or angle of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
A second aspect of the present application provides a device for debugging a camera, configured to execute the method in the first aspect or any possible implementation of the first aspect. Specifically, the camera debugging system includes modules or units for executing the method in the first aspect or any possible implementation of the first aspect, such as an acquisition module, a display module, a determination module, and a detection module.
A third aspect of the present application provides a device for debugging a camera, which includes a processor and a memory, where the memory is used to store a program, and the processor implements the method in the first aspect or any possible implementation of the first aspect by executing the program.
A fourth aspect of the present application provides a computer-readable storage medium storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, the processor executes the method of the first aspect or any possible implementation of the first aspect.
A fifth aspect of the present application provides a computer program product storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, the processor executes the method of the first aspect or any possible implementation of the first aspect.
A sixth aspect of the present application provides a chip system, where the chip system includes at least one processor and an interface, the interface is used to receive data and/or signals, and the at least one processor is used to support a computer device in implementing the functions involved in the first aspect or any possible implementation of the first aspect. In a possible design, the chip system may further include a memory for storing the program instructions and data necessary for the computer device. The chip system may consist of chips, or may include chips and other discrete devices.
It should be understood that, for the specific descriptions of the second to sixth aspects and their various implementations, reference may be made to the detailed descriptions in the first aspect and its various implementations; for the beneficial effects of the second to sixth aspects and their various implementations, reference may be made to the analysis of beneficial effects in the first aspect and its various implementations, which is not repeated here.
Description of Drawings
FIG. 1 is a schematic diagram of a scene in which a camera provided by an embodiment of the present application performs smart target capture;
FIG. 2 is a schematic diagram of another scene in which the camera provided by an embodiment of the present application performs smart target capture;
FIG. 3 is a schematic diagram of an embodiment of a method for debugging a camera provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of a shooting picture provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of the shooting picture provided by an embodiment of the present application;
FIG. 6A is a schematic diagram of another embodiment of the shooting picture provided by an embodiment of the present application;
FIG. 6B is a schematic diagram of another embodiment of the shooting picture provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of the method for debugging a camera provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an embodiment of content displayed on a display screen provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a device for debugging a camera provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of another embodiment of the device for debugging a camera provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another embodiment of the device for debugging a camera provided by an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Those of ordinary skill in the art will appreciate that, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
The terms "first", "second" and the like in the specification, the claims and the accompanying drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device including a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
In addition, in order to better describe the present application, numerous specific details are given in the following detailed description. Those skilled in the art will understand that the present application can also be implemented without certain specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present application.
Embodiments of the present application provide a method for debugging a camera and related equipment, which are used to improve the debugging efficiency of the camera and improve the debugging effect of the camera. The embodiments of the present application also provide a corresponding computer-readable storage medium, a device for debugging a camera, and the like. These are described in detail below.
Application scenarios of the embodiments of the present application are illustrated below by way of example.
With the rapid development of camera technology, cameras are used more and more widely, and a camera supporting artificial intelligence (AI) capabilities can realize smart target capture. Specifically, when the camera is working, it continuously shoots the monitoring scene (that is, it continuously records video), and the shooting result is a video containing multiple consecutive frames of shooting pictures. The user (for example, the installer or operator of the camera) can set in advance a capture range located on the shooting picture. The camera obtains shooting pictures and performs capture target detection on them; when the camera detects a capture target in a certain shooting picture and detects that the capture target is within the preset capture range in the shooting picture, it can automatically capture the capture target to obtain an image containing the capture target, that is, a capture picture or capture image.
In order to obtain better capture results, the camera usually needs to be debugged. During debugging, the camera is usually made to shoot continuously to obtain shooting pictures, a debugging target is set, and the shooting angle and shooting parameters of the camera are then adjusted so that the debugging target is located in the ideal area of the camera's shooting picture.
Before debugging the camera, the user (for example, a camera debugging worker) may first specify the capture target and the capture area in the monitoring scene.
For example, as shown in FIG. 1, assume that the monitoring scene of camera C is the entrance of a building and that camera C needs to capture the people entering the building. The user can then determine that the capture target in this monitoring scene is a person. Moreover, after analysis, the user finds that people entering the building pass through area A with high probability, so the user can use area A as the capture area in this scene.
For example, as shown in FIG. 2, assume that the monitoring scene of camera C is a parking lot and that vehicles entering the parking lot need to be captured. The user can then determine that the capture target in this monitoring scene is a vehicle. After analysis, the user finds that when a vehicle enters the parking lot, it usually stops in front of the barrier gate and the license plate of the vehicle is very likely to be in area B, so the user can use area B as the capture area in this scene.
After specifying the capture target and the capture area in the monitoring scene, the user can set a debugging target consistent with the capture target at the capture area of the monitoring scene. For example, in the scene shown in FIG. 1, the user can have a person act as the debugging target and have that person stand in area A. In the scene shown in FIG. 2, the user can have a vehicle act as the debugging target and have the license plate of that vehicle located in area B. During debugging, the debugging target can be kept at the pre-specified capture area, so as to assist the user in completing the debugging of the camera.
It should be understood that the situation of the debugging target in the shooting picture reflects the situation of the capture target in the shooting picture when the camera subsequently works. Therefore, by adjusting the shooting angle and shooting parameters of the camera so that the angle, size and position of the debugging target are each in an ideal state, the angle, size and position of the capture target will in most cases also be in an ideal state when the camera works subsequently, so that the quality of the captured target is guaranteed.
The present application provides a method for debugging a camera, which can be executed by a device for debugging a camera (referred to as a debugging device for short). It should be understood that the debugging device may be the camera to be debugged itself, or any other device with computing and display functions, for example, a computer, a mobile phone or a tablet computer, which is not limited in the embodiments of the present application.
The present application uses the application scenario shown in FIG. 1 as an example to introduce the method for debugging the camera.
An implementation of the method for debugging a camera provided by the present application is described below with reference to FIG. 3 to FIG. 6B.
As shown in FIG. 3, the method for debugging a camera includes but is not limited to the following steps:
S301. Obtain a shooting picture obtained by a camera shooting a monitoring scene.
During debugging, the debugging device can continuously obtain, in real time, the shooting pictures obtained by the camera shooting the monitoring scene, so that the user can complete the debugging of the camera based on the shooting pictures. Since a debugging target is set at a preset position of the monitoring scene (that is, a position within the pre-specified capture area) during debugging, the shooting picture includes the debugging target. For example, in the scene shown in FIG. 1, the shooting picture includes the person located in area A.
It should be understood that when the debugging device is a device other than the camera, the debugging device can communicate with the camera and receive the shooting pictures sent by the camera; when the debugging device is the camera itself, the debugging device can directly shoot the monitoring scene to obtain the shooting pictures.
S302. Display a guide frame on the shooting picture.
The guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera, so that the degree of coincidence between the first area and the second area meets the user's requirements, where the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
The shape, size and position of the guide frame displayed on the shooting picture are introduced below in turn.
For example, the shape of the guide frame displayed on the shooting picture can be determined according to the capture requirements. Specifically, multiple types of guide frames can be set according to the capture requirements, and each type of guide frame has a different shape. The type of the guide frame can then be determined according to the type of the debugging target, and a guide frame of that type is displayed. For example, guide frames of the "person" type and the "vehicle" type can be set in advance, where the shape (outline) of the "person" type guide frame can be the whole body or the upper body of a person, and the shape of the "vehicle" type guide frame is a vehicle. In this embodiment, the type of the debugging target is "person", so a guide frame of the "person" type can be displayed.
It should be understood that, according to the actual capture requirements, the debugging target may also be of types other than the above "person" and "vehicle"; correspondingly, more types of guide frames, that is, guide frames with more shapes, can be set, which is not limited in this application.
In addition, optionally, the shape of the guide frame displayed on the shooting picture may also be a preset fixed shape, which is not limited in this application.
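As a non-limiting sketch of the type-to-shape correspondence described above, a debugging device might keep a small table of guide-frame templates keyed by target type; the outline names and aspect ratios below are assumptions made only for illustration.

```python
# Hypothetical guide-frame templates: the outline kind plus a typical
# width-to-height aspect ratio for each supported target type.
GUIDE_FRAME_TEMPLATES = {
    "person": {"outline": "upper_body", "aspect_ratio": 3 / 4},  # assumed ratio
    "vehicle": {"outline": "vehicle", "aspect_ratio": 16 / 9},   # assumed ratio
}


def guide_frame_template(target_type):
    """Return the guide-frame template matching the debugging target's type."""
    try:
        return GUIDE_FRAME_TEMPLATES[target_type]
    except KeyError:
        raise ValueError(f"no guide frame defined for target type {target_type!r}")


print(guide_frame_template("person"))
```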
For example, the size of the guide frame displayed on the shooting picture can be determined according to the resolution of the camera. That is to say, the size of the guide frame can be adjusted according to the resolution of the camera, so that the guide frame has a suitable size in the shooting pictures of different cameras. It should be understood that the higher the resolution of the camera, the larger the guide frame should be, and the lower the resolution of the camera, the smaller the guide frame should be. In actual implementation, the correspondence between different resolutions and different guide frame sizes can be preset, and a guide frame of the corresponding size is displayed on the shooting picture according to the resolution of the camera to be debugged.
Optionally, the size of the guide frame displayed on the shooting picture may also be set according to the pixel requirements for the capture target, that is, the size of the guide frame can be adjusted according to the pixel requirements of different capture targets in the actual scene, so that the guide frame can meet the pixel requirements of different capture targets.
Optionally, the size of the guide frame displayed on the shooting picture may also be a preset fixed size, which is not limited in this application.
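The preset correspondence between camera resolution and guide-frame size mentioned above could, for example, be kept as a lookup table, as in the following sketch; the concrete sizes are illustrative assumptions, not values taken from this application.

```python
# Hypothetical preset mapping from camera resolution (width x height, pixels)
# to a suitable guide-frame size (width x height, pixels).
GUIDE_FRAME_SIZE_BY_RESOLUTION = {
    (1280, 720): (160, 320),
    (1920, 1080): (240, 480),
    (3840, 2160): (480, 960),
}


def guide_frame_size(resolution):
    """Pick the preset guide-frame size for the camera's resolution.

    If the exact resolution is not in the table, scale the 1080p entry in
    proportion to the picture width (a simple fallback assumption).
    """
    if resolution in GUIDE_FRAME_SIZE_BY_RESOLUTION:
        return GUIDE_FRAME_SIZE_BY_RESOLUTION[resolution]
    base_w, base_h = GUIDE_FRAME_SIZE_BY_RESOLUTION[(1920, 1080)]
    scale = resolution[0] / 1920
    return int(base_w * scale), int(base_h * scale)


print(guide_frame_size((1920, 1080)))  # (240, 480)
print(guide_frame_size((2560, 1440)))  # (320, 640)
```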
For example, the position at which the guide frame is displayed on the shooting picture may be a position preset according to actual requirements; for example, the display position of the guide frame can be set according to the focus area of the camera. Specifically, in the left-right (horizontal) dimension, the guide frame is located in the middle of the shooting picture, and in the up-down (vertical) dimension, the guide frame is located at the bottom of the shooting picture. For example, the focus area of the camera (the second area) is the bottom middle of the shooting picture, with a height no greater than one third of the height of the shooting picture; more specifically, the vertical centerline of the shooting picture coincides with the vertical centerline of the second area, the bottom of the shooting picture coincides with the bottom of the second area, and the vertical height of the second area is no greater than one third of the vertical height of the shooting picture. In other words, if the shooting picture is divided into three parts from top to bottom, the second area is located in the part closest to the bottom, at the middle position in the horizontal direction. This area is the most ideal focus area of the camera, so the position of the guide frame can be set in this area. The camera can then focus on the area where the guide frame is located, so that in the picture captured by the camera, the area where the guide frame is located (that is, the area where the capture target is located when the camera is working) is clearer and the image quality of the capture target is best, which improves the capture effect. See, for example, 20 shown in FIG. 4. It should be understood that, since the guide frame corresponds to an area, the display position of the guide frame on the shooting picture can be measured by the position of a certain point of the guide frame on the shooting picture, for example, a point on the outline of the guide frame, or the center point of the guide frame (the intersection of its horizontal and vertical centerlines), which is not limited in this application.
It should be understood that the position of the above second area is usually a relatively ideal focus area of the camera within the entire shooting picture; setting the guide frame in this area makes the quality of the capture target in the shooting picture higher, thereby improving the capture effect. It should be noted that, depending on the actual situation, the preferred focus area of the camera may change; correspondingly, the specific position of the second area in the shooting picture can be adjusted according to the actual situation, which is not limited in the embodiments of the present application. Of course, in some implementations, the position at which the guide frame is displayed on the shooting picture may be another position preset according to actual requirements, unrelated to the focus area of the camera, which is not limited in this application. Combining the above solutions, in a practical example, the debugging device can display an interface that lets the user input the type of the debugging target, the resolution of the camera, or the required position of the guide frame, so that the shape, size or position of the guide frame can be determined based on the user's input, and a guide frame of the corresponding shape and size is then displayed at the corresponding position of the shooting picture.
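The bottom-center placement described above (vertical centerlines coinciding, bottom edges coinciding, height no greater than one third of the picture height) can be computed as in the sketch below; the function name is hypothetical.

```python
def guide_frame_rect(frame_w, frame_h, box_w, box_h):
    """Place a guide frame of size (box_w, box_h) at the bottom center of the picture.

    The box height is clamped to one third of the picture height, the bottom of
    the box coincides with the bottom of the picture, and the vertical centerline
    of the box coincides with that of the picture. Returns (x, y, w, h).
    """
    box_h = min(box_h, frame_h // 3)   # height no greater than 1/3 of the picture height
    x = (frame_w - box_w) // 2         # horizontally centered
    y = frame_h - box_h                # bottom edges coincide
    return x, y, box_w, box_h


# Example for a 1920x1080 picture and a 240x480 guide frame.
print(guide_frame_rect(1920, 1080, 240, 480))  # (840, 720, 240, 360)
```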
This application does not limit the display form of the guide frame on the shooting picture, as long as the user can intuitively identify the guide frame in the shooting picture. Several possible display forms of the guide frame on the shooting picture are introduced below.
Method 1: the guide frame can be displayed in the form of an outline. Specifically, the outline of the guide frame can be displayed on the shooting picture, for example, the second area 20 in FIG. 4. The outline can be a dotted line or a solid line of different thicknesses, types and colors, which is not limited in the embodiments of the present application.
Method 2: the guide frame can be displayed by setting different transparencies for the above second area (that is, the area where the guide frame is located) and the third area, where the third area is the area in the shooting picture other than the second area. As shown in FIG. 5, when displaying the guide frame, the debugging device can set the transparency of the second area 20 and that of the third area 30 to be different. It should be understood that this application does not limit the specific transparency values of the second area and the third area, as long as the user can distinguish the second area from the third area by the different transparencies. When the second area is not completely transparent, it can also be filled, for example with a solid color, a gradient or a pattern. The third area can be handled similarly, which is not limited in the embodiments of the present application. For example, in FIG. 5, the debugging device fills the third area with oblique lines.
Optionally, in Method 2, the guide frame is displayed on the shooting picture by superimposing a transparent picture with an alpha channel, so as to realize the different transparencies of the second area and the third area, where the alpha channel can be used to adjust the translucency of an image; the camera debugging system superimposes a transparent image with an alpha channel on the shooting picture as the guide frame.
Optionally, Method 1 and Method 2 can also be used in combination, that is, while the guide frame is displayed in the form of an outline, the second area and the third area can also be given different transparencies, so as to further enhance the guidance for the user.
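One possible way to realize the two display forms above (an outline for Method 1 plus a dimmer, semi-transparent third area for Method 2) is sketched below with OpenCV and NumPy. For simplicity the guide frame is treated as its bounding rectangle, and the transparency and colors are arbitrary illustrative choices.

```python
import cv2
import numpy as np


def draw_guide_frame(frame, rect, alpha=0.5):
    """Overlay a guide frame on a BGR picture.

    rect is (x, y, w, h): the second area. The third area (everything outside
    the rectangle) is blended towards black so that it appears semi-transparent,
    and the outline of the second area is drawn on top.
    """
    x, y, w, h = rect
    shaded = (frame * (1.0 - alpha)).astype(frame.dtype)  # dim the whole picture
    shaded[y:y + h, x:x + w] = frame[y:y + h, x:x + w]     # restore the second area
    cv2.rectangle(shaded, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=2)
    return shaded


# Example on a synthetic gray picture.
picture = np.full((1080, 1920, 3), 128, dtype=np.uint8)
result = draw_guide_frame(picture, (840, 720, 240, 360))
```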
It should be understood that, during debugging, the shape, size and position of the guide frame (that is, the second area) displayed on the shooting picture of the camera usually remain unchanged (that is, the content of the shooting picture may change, but the guide frame displayed on it stays the same). Therefore, the overlap between the first area (that is, the area where the debugging target is located in the shooting picture) and the second area is determined by the shape, size and position of the first area, where the shape and position of the first area are related to the shooting angle of the camera and the size of the first area is related to the shooting parameters of the camera. Specifically, adjusting the shooting angle of the camera is equivalent to controlling the camera to rotate, which changes the range of the monitoring scene shot by the camera and thereby the shape and position of the area where the debugging target appears in the shooting picture (that is, the first area). Adjusting the shooting parameters of the camera is equivalent to controlling parameters such as the optical magnification of the camera lens, which changes how near or far the camera shoots the monitoring scene and thereby the size of the area where the debugging target appears in the shooting picture (that is, the first area). Therefore, the user can continuously adjust the shooting angle and/or shooting parameters of the camera to increase the degree of coincidence between the first area and the second area until the degree of coincidence satisfies the user; for example, the user can complete debugging upon seeing that the first area and the second area almost completely coincide. For example, as shown in FIG. 6A, assume that after obtaining the shooting picture the debugging device displays the guide frame on it, and the user finds that the first area 10 where the debugging target is located and the second area 20 where the guide frame is located do not coincide. Then, after the user adjusts the shooting angle and/or shooting parameters of the camera, as shown in FIG. 6B, the first area 10 and the second area 20 almost completely coincide. At this point, assuming the user considers that the degree of coincidence between the first area 10 and the second area 20 meets the requirements, the user can stop adjusting the shooting angle and/or shooting parameters of the camera and complete the debugging of the camera.
In addition, in scenarios where there are requirements on the pixel size of the capture target in the camera's shooting picture, the advantages of the above solution of this application are even more obvious. For example, in a face capture scenario, it is usually required that the distance between the two ears of a face in the shooting picture be 120 px; in a vehicle capture scenario, it is usually required that the long side of the license plate in the shooting picture be 120 px, and so on. In these cases, with existing debugging solutions, the user cannot directly judge with the naked eye whether the pixel value of the capture target in the shooting picture meets the requirements, so the user needs to repeatedly and manually adjust the optical magnification of the camera to meet the capture requirements. With the above solution of this application, by displaying the guide frame on the shooting picture and setting the size of the guide frame to a size that meets the capture requirements (for example, a horizontal width of no less than 120 px), the user only needs to adjust the camera so that the debugging target in the shooting picture basically coincides with the guide frame, and the pixel value of the capture target in the camera's shooting picture will then meet the requirements.
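As a small illustration of the pixel-requirement point above: once the guide frame is sized to the requirement (for example a horizontal width of at least 120 px), checking that a target which coincides with the guide frame satisfies the requirement reduces to a width comparison. A sketch under that assumption, with hypothetical names:

```python
MIN_TARGET_WIDTH_PX = 120  # e.g. the distance between the ears of a face, or the
                           # long side of a license plate, as mentioned above


def meets_pixel_requirement(target_box):
    """Check that the target's horizontal width satisfies the capture requirement."""
    _, _, width, _ = target_box
    return width >= MIN_TARGET_WIDTH_PX


print(meets_pixel_requirement((840, 720, 240, 360)))  # True
print(meets_pixel_requirement((840, 720, 90, 180)))   # False
```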
The camera usually focuses based on a preset focusing algorithm, which usually takes the central area of the shooting picture as a high-weight focusing area, with the result that the picture quality of the central area of the shooting picture is better. In practice, however, the second area where the guide frame is located may not be in the central area of the picture. In addition, in some other cases, the position at which the camera focuses may be in the background or in other areas that the user does not care about. For example, in a low-illumination environment, in order to obtain a better shooting effect, the camera usually shoots with a large aperture and a shallow depth of field, which may cause the camera to focus on the background of the shooting picture or on other areas the user does not care about. In view of these two situations, optionally, in the implementation provided by this application, the debugging device can also adjust the focus area of the camera so that the camera takes the second area as the focus area, making the definition of the second area higher than that of the third area. It should be understood that after debugging is completed, the second area where the guide frame is located is the area where the capture target is located; therefore, in this way, in the shooting pictures obtained while the camera is working, the definition of the area where the capture target is located is higher, which improves the capture effect and improves the accuracy of subsequent intelligent analysis of the capture target.
Optionally, the user can also confirm the focus area manually: after the user considers that the degree of coincidence between the first area and the second area meets the requirements, the user triggers the camera to focus, or the debugging device automatically triggers the camera to focus, which is not limited in the embodiments of the present application. The default focus area of the camera is the second area. If, after the camera focuses, the user determines that the definition of the debugging target meets the user's needs, the user can confirm through the debugging device that the current focus area is the focus area of the camera. After the user confirms the focus area, the camera uses the focus area confirmed by the user for shooting or capturing in its subsequent work. If the definition of the debugging target after focusing does not meet the user's needs, the user can also manually adjust the current focus area through the debugging device and then confirm the focus area of the camera.
It should be noted that, in the above implementation process, when the debugging device is a device other than the camera, the debugging device can also send the determined focus area to the camera, so that the camera captures according to this focus area during operation, thereby obtaining capture pictures in which the capture target is clear.
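A minimal sketch of how a debugging device might hand the chosen focus area over to the camera is shown below. `CameraClient` and its `set_focus_region` method are hypothetical names; the actual interface between the debugging device and the camera is not specified here, so only the request serialization is shown.

```python
import json


class CameraClient:
    """Hypothetical stand-in for the channel between the debugging device and the camera."""

    def __init__(self, address):
        self.address = address

    def set_focus_region(self, rect):
        # In a real deployment this request would be sent over the camera's own
        # management protocol; here it is only serialized for illustration.
        x, y, w, h = rect
        return json.dumps({"cmd": "set_focus_region", "x": x, "y": y, "w": w, "h": h})


# Use the second area (the guide-frame area) as the focus area, so that the area
# where the capture target will appear is rendered with the best definition.
camera = CameraClient("192.0.2.10")
print(camera.set_focus_region((840, 720, 240, 360)))
```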
Optionally, after the focus area is determined, the user can determine, through the debugging device, the capture range of the camera in the shooting picture. The default capture range of the camera is the entire shooting picture, but in that case the camera may also capture when the capture target is not in the focus area, and the quality of the resulting captures will be relatively poor. Therefore, the user can draw or adjust the capture range of the camera in the shooting picture through the debugging device; for example, the user may set the capture range to the bottom third of the shooting picture, or set the capture range to the second area. The camera then captures only when the capture target is within the capture range in the shooting picture, so the quality of the capture results can be significantly improved.
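To illustrate the capture-range idea above, the snapshot decision can be reduced to a containment test: capture only when the detected target lies inside the user-drawn capture range. A sketch under that assumption:

```python
def inside_capture_range(target_box, capture_range):
    """Return True if the center of the detected target lies within the capture range.

    Both arguments are (x, y, w, h) rectangles in picture coordinates.
    """
    tx, ty, tw, th = target_box
    cx, cy = tx + tw / 2, ty + th / 2   # center of the detected target
    rx, ry, rw, rh = capture_range
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh


# Capture range set to the bottom third of a 1920x1080 picture, as in the example above.
capture_range = (0, 720, 1920, 360)
print(inside_capture_range((900, 800, 100, 200), capture_range))  # True -> capture
print(inside_capture_range((900, 100, 100, 200), capture_range))  # False -> skip
```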
In addition, while the camera is working, it may need to compress the captured pictures and send them to other devices. For example, the camera may encode the captured pictures in the JPEG (joint photographic experts group) format, or encode them according to a standard such as H.264, and then send them to a server for storage. Therefore, also optionally, the debugging device may determine the encoding parameters of the camera, so that when the camera encodes the shooting picture in subsequent work it treats the area where the guide frame is located (that is, the second area) as the encoding region of interest (ROI), making the definition of the second area higher than that of the other areas in the corresponding encoded image. In this implementation, the encoding parameters may be indication information of the encoding ROI; and when the debugging device is a device other than the camera, the debugging device may also send the encoding parameters to the camera, so that the camera encodes according to these parameters during operation. It should be understood that this also makes the area where the capture target is located clearer, which improves the capture effect and the accuracy of subsequent intelligent analysis of the capture target.
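As a hedged illustration of what such encoding parameters could look like, the sketch below builds a per-macroblock quantization-parameter (QP) offset map that spends more bits inside the guide-frame ROI; the 16-pixel macroblock size matches H.264, but the specific offsets and the map format are assumptions of this illustration rather than any particular encoder's API.

```python
import numpy as np

def roi_qp_offset_map(frame_w, frame_h, roi_box, roi_offset=-6, bg_offset=0, mb=16):
    """Per-macroblock QP offsets: a negative offset inside the ROI means finer
    quantization (higher quality) for the second area than for the rest of the frame."""
    cols = (frame_w + mb - 1) // mb
    rows = (frame_h + mb - 1) // mb
    qp = np.full((rows, cols), bg_offset, dtype=np.int8)
    x, y, w, h = roi_box
    qp[y // mb:(y + h + mb - 1) // mb, x // mb:(x + w + mb - 1) // mb] = roi_offset
    return qp

offsets = roi_qp_offset_map(1920, 1080, roi_box=(200, 600, 400, 400))
```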
Another implementation of the method for debugging a camera provided in this application is described below with reference to FIG. 7 and FIG. 8.
As shown in FIG. 7, the method for debugging a camera includes, but is not limited to, the following steps:
S701. Acquire a shooting picture obtained by a camera shooting a monitoring scene.
Refer to the related description in S301 above; details are not repeated here.
S702. Display a guide frame on the shooting picture in response to a first instruction.
As shown in FIG. 8, after the shooting picture is acquired, the guide frame may be displayed on the shooting picture after the user triggers the debugging mode. For example, the debugging device includes a display screen on which a debugging interface and a shooting picture frame can be displayed, with the shooting picture frame arranged on the left side of the display screen and the debugging interface on the right side. After the debugging device acquires the shooting picture, the shooting picture is displayed in the shooting picture frame. When the user clicks the start-debugging button in the debugging interface, this is equivalent to the user issuing the first instruction. In response to the first instruction, which indicates entering the debugging mode, the debugging device starts to display the guide frame on the shooting picture. For the manner of displaying the guide frame on the shooting picture, refer to the description of S302 above; details are not repeated here.
It should be noted that the display screen may present the debugging interface and the shooting picture in many ways and forms. For example, the debugging interface may be arranged at the top of the display screen and the shooting picture frame at the bottom; or the shooting picture and the guide frame may start to be displayed in the shooting picture frame only after the user clicks the start-debugging button (a virtual button) in the debugging interface; or, after the user clicks the start-debugging button in the debugging interface, the shooting picture frame may be displayed full-screen, and the user may exit via a return button on the display screen to show the debugging interface again. This is not limited in the embodiments of this application.
S703. Detect the coincidence degree of the first area and the second area, and output the coincidence degree.
After the guide frame is displayed on the shooting picture, the user may adjust the shooting angle and/or shooting parameters of the camera. Specifically, the user may adjust the shooting angle and shooting parameters of the camera, for example, by moving the camera to adjust the shooting angle and operating the camera's adjustment buttons to adjust the shooting parameters. To make debugging more convenient, the user may also enter, in the debugging device, the shooting angle and shooting parameters to which the camera needs to be adjusted; the debugging device transmits these data to the camera, and the camera automatically adjusts its shooting angle and shooting parameters accordingly. During this process, the debugging device may monitor the coincidence degree of the first area and the second area in real time and output the coincidence degree.
Specifically, the debugging device can identify the first area. Then, for example, the coincidence degree of the first area and the second area may be determined according to the proportion of the first area that lies within the second area and the proportion of the second area that lies within the first area, and the coincidence degree may be output in real time on this basis. Because the coincidence degree quantifies how well the first area and the second area coincide or match, it reflects their coincidence relatively precisely. In this possible implementation, the user can therefore conveniently learn the precise coincidence of the two areas and debug according to how the coincidence degree changes, which further improves debugging efficiency and the user experience.
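As one possible reading of this paragraph (an assumption of this illustration, since the application does not fix a formula), the sketch below computes the two mutual area proportions for axis-aligned rectangular areas and combines them into a single coincidence degree by taking their minimum:

```python
def box_area(box):
    _, _, w, h = box
    return max(w, 0) * max(h, 0)

def intersection(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    return (x1, y1, max(x2 - x1, 0), max(y2 - y1, 0))

def coincidence_degree(first_area, second_area):
    """first_area: debugging-target box; second_area: guide-frame box; both (x, y, w, h)."""
    inter = box_area(intersection(first_area, second_area))
    if inter == 0:
        return 0.0
    ratio_in_second = inter / box_area(second_area)  # share of the guide frame covered by the target
    ratio_in_first = inter / box_area(first_area)    # share of the target lying inside the guide frame
    return min(ratio_in_second, ratio_in_first)      # one way to combine the two proportions

print(coincidence_degree((100, 100, 200, 400), (120, 110, 200, 400)))  # about 0.88
```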
It should be noted that the coincidence degree of the first area and the second area may also be calculated in other ways, which is not limited in this application.
Optionally, the debugging device is further provided with a speaker, and the coincidence degree may be played as voice output, displayed as text, or output in both forms at the same time. The user can thus intuitively perceive the current coincidence degree while debugging the camera, which facilitates debugging and improves the user experience.
When the coincidence degree is output in text form, it may be displayed on the shooting picture or on an interface outside the shooting picture, for example above the second area on the shooting picture or in the debugging interface. The coincidence degree may be displayed as text in various fonts, colors, or sizes, or the percentage it represents may be shown with a bar chart; this is not limited in the embodiments of this application.
S704. Output first prompt information when the coincidence degree is not higher than a first threshold.
When the coincidence degree is not higher than the first threshold, the difference between the debugging target and the guide frame is still obvious, and the user needs to adjust the shooting angle and/or shooting parameters of the camera. In this case, the first prompt information is output to prompt the user to adjust the shooting angle and/or shooting parameters of the camera, which strengthens the guidance provided to the user when debugging the camera and improves the user experience.
Optionally, the first prompt information may be played as voice output or displayed as text, so that the user can intuitively perceive the first prompt information while debugging the camera, which facilitates debugging and improves the user experience.
Optionally, when the coincidence degree is lower than the first threshold, the debugging device may give the user more detailed and targeted prompts. Specifically, a second threshold may be set (the second threshold being smaller than the first threshold). When the coincidence degree is lower than the second threshold, the first prompt information may be information prompting the user to adjust the shooting angle of the camera; when the coincidence degree is greater than the second threshold and lower than the first threshold, the first prompt information may be information prompting the user to adjust the shooting parameters of the camera. The first threshold and the second threshold may be preset values, which are not limited in this application.
For example, assume the first threshold is 90% and the second threshold is 70%. When the user starts to debug the camera, the debugging target may be far from the guide frame, so the debugging device detects that the coincidence degree of the first area and the second area is lower than 70%. In this case, the debugging device may output first prompt information prompting the user to adjust the shooting angle of the camera, for example, by displaying "Please adjust the shooting angle of the camera so that the debugging target is close to the guide frame" in the debugging interface and simultaneously playing the corresponding voice prompt.
Then, after the user adjusts the shooting angle of the camera so that the debugging device detects that the coincidence degree of the first area and the second area is greater than or equal to 70% but less than or equal to 90%, the position and shape of the debugging target in the shooting picture may be close to those of the guide frame, but the size of the debugging target may still differ from that of the guide frame. In this case, the debugging device may output first prompt information prompting the user to adjust the shooting parameters of the camera, for example, by displaying "Please adjust the shooting parameters of the camera so that the debugging target coincides with the guide frame" in the debugging interface and simultaneously playing the corresponding voice prompt. After seeing this prompt, the user may adjust the optical magnification of the camera lens so that the size of the debugging target better matches the size of the guide frame. It should be understood that, further, when the debugging device detects that the coincidence degree of the first area and the second area is higher than 90%, step S706 is performed.
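Under the example thresholds above (70% and 90% are illustrative values from this example, not fixed by the application), the prompt selection could be sketched as follows:

```python
FIRST_THRESHOLD = 0.90   # example value from the paragraph above
SECOND_THRESHOLD = 0.70  # example value from the paragraph above

def select_prompt(coincidence):
    # Below the second threshold: the target is far from the guide frame, so adjust the angle.
    if coincidence < SECOND_THRESHOLD:
        return "Please adjust the shooting angle of the camera so that the debugging target is close to the guide frame."
    # Between the thresholds: position is close, but the size still differs, so adjust parameters.
    if coincidence <= FIRST_THRESHOLD:
        return "Please adjust the shooting parameters of the camera so that the debugging target coincides with the guide frame."
    # Above the first threshold: the coincidence meets the expected requirement.
    return "Debugging completed."

print(select_prompt(0.55))
print(select_prompt(0.82))
print(select_prompt(0.95))
```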
S705. Output second prompt information when the coincidence degree is higher than the first threshold.
Optionally, the second prompt information may be played as voice output or displayed as text, so that the user can intuitively perceive the second prompt information while debugging the camera, which facilitates debugging and improves the user experience.
When the coincidence degree is higher than the first threshold, the coincidence of the debugging target and the guide frame generally meets the expected requirement. At this point, the debugging device may output the second prompt information, for example, by displaying "Debugging completed" in the debugging interface and simultaneously playing the voice "Debugging completed". The second prompt information indicates that the debugging of the camera has been completed, which strengthens the guidance provided to the user when debugging the camera and improves the user experience.
S706. Stop displaying the guide frame on the shooting picture in response to a second instruction.
When the debugging device outputs the second prompt information, the user may observe the relationship between the debugging target and the guide frame in the shooting picture. When the user considers that the debugging target exactly coincides with the guide frame, or that it otherwise meets the user's needs, the user may confirm in the debugging device that debugging should stop. For example, after the debugging device outputs the second prompt information, the user clicks the stop-debugging button in the debugging interface, which is equivalent to the user issuing the second instruction; in response to the second instruction, which indicates exiting the debugging mode, the debugging device stops displaying the guide frame on the shooting picture, and the debugging of the camera is completed. When the debugging device outputs the second prompt information, the user may also continue debugging according to what is observed until the user's needs are met; details are not repeated here.
It should be noted that the user may click the stop-debugging button at any time to make the debugging device exit the debugging mode. In this case, the debugging device controls the camera to save the current shooting parameters and shooting angle, so that when the user later returns to the debugging mode, the previous debugging work can be continued.
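Purely as a sketch of one way such state could be persisted (the file format and field names are assumptions of this illustration), saving the current shooting angle and parameters on exit might look like this:

```python
import json

def save_debug_state(path, shooting_angle, shooting_params):
    # Persist the current shooting angle and parameters when exiting debug mode,
    # so a later debugging session can resume from where the user left off.
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"angle": shooting_angle, "params": shooting_params}, f)

def load_debug_state(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

save_debug_state("debug_state.json",
                 shooting_angle={"pan": 12.5, "tilt": -3.0},
                 shooting_params={"zoom": 2.0, "exposure": "auto"})
```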
In the embodiments of this application, the debugging device detects the coincidence degree of the first area corresponding to the debugging target and the second area corresponding to the guide frame, and can output the coincidence degree in multiple ways; moreover, the debugging device outputs different prompt information according to different coincidence degrees. Compared with the implementation shown in FIG. 3, this implementation can further strengthen the guidance provided to the user during debugging, thereby further improving the efficiency of camera debugging and the user experience.
It should be understood that, similar to the implementation shown in FIG. 3, in the implementation shown in FIG. 7 the debugging device may also determine the focus area and/or encoding parameters of the camera, so that the area where the guide frame is located is clearer in the pictures captured by the camera and in the encoded pictures, which improves capture quality; details are not repeated here.
Also similar to the implementation shown in FIG. 3, the implementation shown in FIG. 7 can be applied to scenarios with multiple types of capture targets; details are not repeated here.
The embodiments of this application further provide a device for debugging a camera (referred to as a debugging device for short) capable of implementing the foregoing method for debugging a camera, which is described below with reference to FIG. 9 to FIG. 11.
Refer to FIG. 9, which is a schematic structural diagram of a debugging device 900 provided in this application. The debugging device 900 includes:
an acquisition module 901, configured to acquire a shooting picture obtained by a camera shooting a monitoring scene, where the shooting picture includes a debugging target, and the debugging target is a shooting target that is set in the monitoring scene and used for debugging the camera; the acquisition module 901 may perform step S301 or step S701 of the foregoing method for debugging a camera; and
a display module 902, configured to display a guide frame on the shooting picture, where the guide frame is used to guide a user to adjust a shooting angle and/or shooting parameters of the camera so that the coincidence degree of a first area and a second area meets the user's needs, the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture. The display module 902 may perform step S302 in the foregoing method embodiments and may correspond to the display screen in the foregoing method embodiments.
In the embodiments of this application, the display module 902 displays a guide frame on the shooting picture of the camera, where the guide frame is used to guide the user to adjust the shooting angle and/or shooting parameters of the camera so that the debugging target lies within the guide frame as far as possible, thereby achieving low-cost debugging of the camera while ensuring debugging efficiency and shooting effect.
Refer to FIG. 10, which is a schematic structural diagram of a debugging device 1000 provided in this application. The debugging device 1000 includes:
an acquisition module 1001, configured to acquire a shooting picture obtained by a camera shooting a monitoring scene, where the shooting picture includes a debugging target, and the debugging target is a shooting target that is set in the monitoring scene and used for debugging the camera; the acquisition module 1001 may perform step S301 or step S701 in the foregoing method embodiments; and
a display module 1002, configured to display a guide frame on the shooting picture, where the guide frame is used to guide a user to adjust a shooting angle and/or shooting parameters of the camera so that the coincidence degree of a first area and a second area meets the user's needs, the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture. The display module 1002 may perform step S302 or step S702 in the foregoing method embodiments and may correspond to the display screen in the foregoing method embodiments.
Optionally, the acquisition module 1001 is further configured to acquire a target type of the debugging target, and the display module 1002 is specifically configured to display the guide frame on the shooting picture, where the type of the guide frame is determined based on the target type.
Optionally, the acquisition module 1001 is further configured to acquire a resolution of the camera, and the display module 1002 is further specifically configured to display the guide frame on the shooting picture, where the size of the guide frame is determined based on the resolution.
Optionally, the second area and a third area have different transparency, where the third area is the area of the shooting picture other than the second area.
Optionally, the display module 1002 is specifically configured to display the outline of the second area on the shooting picture.
Optionally, the debugging device 1000 further includes a determination module 1003, configured to determine a focus area and/or encoding parameters of the camera so that the definition of the second area is higher than that of the third area, where the third area is the area of the shooting picture other than the second area.
Optionally, the debugging device 1000 further includes a detection module 1004, configured to detect the coincidence degree of the first area and the second area; the detection module 1004 may perform step S703 in the foregoing method embodiments. The display module 1002 is further configured to output the coincidence degree; in doing so, the display module 1002 may perform step S703 in the foregoing method embodiments and may correspond to the display screen and the speaker in the foregoing method embodiments.
Optionally, the display module 1002 is further configured to output first prompt information when the coincidence degree is not higher than a first threshold, where the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera. The display module 1002 may perform step S704 in the foregoing method embodiments and may correspond to the display screen and the speaker in the foregoing method embodiments.
Optionally, the display module 1002 is further configured to output second prompt information when the coincidence degree is higher than the first threshold, where the second prompt information is used to indicate that the debugging of the camera has been completed. The display module 1002 may perform step S705 in the foregoing method embodiments and may correspond to the display screen and the speaker in the foregoing method embodiments.
Optionally, the display module 1002 is further specifically configured to display the guide frame on the shooting picture in response to a first instruction, where the first instruction is used to indicate entering the debugging mode. The display module 1002 may perform step S702 in the foregoing method embodiments.
Optionally, the display module 1002 is further specifically configured to stop displaying the guide frame on the shooting picture in response to a second instruction, where the second instruction is used to indicate exiting the debugging mode. The display module 1002 may perform step S706 in the foregoing method embodiments.
Optionally, the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
Optionally, the target type is a person or a vehicle.
It should be understood that the specific implementation details and corresponding beneficial effects of the debugging device 900 or the debugging device 1000 can be understood with reference to the corresponding content in the foregoing method for debugging a camera; details are not repeated here.
FIG. 11 is a schematic structural diagram of a debugging device 1100 provided in this application. The debugging device 1100 includes a processor 1101, a communication interface 1102, a memory 1103, and a bus 1104. The processor 1101 may include a CPU, or a CPU together with at least one of a GPU, an NPU, and other types of processors. The processor 1101, the communication interface 1102, and the memory 1103 are connected to one another through the bus 1104. In the embodiments of this application, the processor 1101 is configured to control and manage the actions of the debugging device 1100; for example, the processor 1101 is configured to perform the foregoing method for debugging a camera and/or other processes of the technologies described herein. The communication interface 1102 is configured to support communication of the debugging device 1100. The memory 1103 is configured to store program code and data of the debugging device 1100.
The processor 1101 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of this application. The processor may also be a combination that implements a computing function, for example, a combination of one or more microprocessors or a combination of a digital signal processor and a microprocessor. The bus 1104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 11, but this does not mean that there is only one bus or only one type of bus.
This application further provides a computer-readable storage medium storing computer-executable instructions. When at least one processor of a device executes the computer-executable instructions, the device performs the method for debugging a camera described in the foregoing embodiments.
This application further provides a computer program product, which includes computer-executable instructions stored in a computer-readable storage medium. At least one processor of a device may read the computer-executable instructions from the computer-readable storage medium, and the at least one processor executes the computer-executable instructions so that the device performs the method for debugging a camera described in the foregoing embodiments.
This application further provides a chip system, which includes at least one processor and an interface. The interface is configured to receive data and/or signals, and the at least one processor is configured to support implementation of the method for debugging a camera described in the foregoing embodiments. In one possible design, the chip system may further include a memory configured to store program instructions and data necessary for the computer device. The chip system may consist of chips, or may include chips and other discrete devices.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (30)

  1. A method for debugging a camera, comprising:
    acquiring a shooting picture obtained by a camera shooting a monitoring scene, wherein the shooting picture comprises a debugging target, and the debugging target is a shooting target that is set in the monitoring scene and used for debugging the camera; and
    displaying a guide frame on the shooting picture, wherein the guide frame is used to guide a user to adjust a shooting angle and/or shooting parameters of the camera so that a coincidence degree of a first area and a second area meets the user's needs, the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
  2. The method according to claim 1, wherein the displaying a guide frame on the shooting picture comprises:
    acquiring a target type of the debugging target; and
    displaying the guide frame on the shooting picture, wherein the type of the guide frame is determined based on the target type.
  3. The method according to claim 1 or 2, wherein the displaying a guide frame on the shooting picture comprises:
    acquiring a resolution of the camera; and
    displaying the guide frame on the shooting picture, wherein the size of the guide frame is determined based on the resolution.
  4. The method according to any one of claims 1 to 3, wherein the second area and a third area have different transparency, and the third area is the area of the shooting picture other than the second area.
  5. The method according to any one of claims 1 to 4, wherein the displaying a guide frame on the shooting picture comprises:
    displaying the outline of the second area on the shooting picture.
  6. The method according to any one of claims 1 to 5, further comprising:
    determining a focus area and/or encoding parameters of the camera so that the definition of the second area is higher than that of a third area, wherein the third area is the area of the shooting picture other than the second area.
  7. The method according to any one of claims 1 to 6, wherein after the displaying a guide frame on the shooting picture, the method further comprises:
    detecting the coincidence degree of the first area and the second area; and
    outputting the coincidence degree.
  8. The method according to claim 7, wherein the outputting the coincidence degree comprises:
    outputting the coincidence degree in the form of voice playback and/or text display.
  9. The method according to claim 7 or 8, wherein after the detecting the coincidence degree of the first area and the second area, the method further comprises:
    outputting first prompt information when the coincidence degree is not higher than a first threshold, wherein the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera.
  10. The method according to any one of claims 7 to 9, wherein after the detecting the coincidence degree of the first area and the second area, the method further comprises:
    outputting second prompt information when the coincidence degree is higher than the first threshold, wherein the second prompt information is used to indicate that the debugging of the camera has been completed.
  11. The method according to any one of claims 1 to 10, wherein the displaying a guide frame on the shooting picture comprises:
    displaying the guide frame on the shooting picture in response to a first instruction, wherein the first instruction is used to indicate entering a debugging mode.
  12. The method according to claim 11, wherein after the displaying the guide frame on the shooting picture in response to the first instruction, the method further comprises:
    stopping displaying the guide frame on the shooting picture in response to a second instruction, wherein the second instruction is used to indicate exiting the debugging mode.
  13. The method according to any one of claims 1 to 12, wherein the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
  14. The method according to any one of claims 2 to 13, wherein the target type is a person or a vehicle.
  15. A device for debugging a camera, comprising:
    an acquisition module, configured to acquire a shooting picture obtained by a camera shooting a monitoring scene, wherein the shooting picture comprises a debugging target, and the debugging target is a shooting target that is set in the monitoring scene and used for debugging the camera; and
    a display module, configured to display a guide frame on the shooting picture, wherein the guide frame is used to guide a user to adjust a shooting angle and/or shooting parameters of the camera so that a coincidence degree of a first area and a second area meets the user's needs, the first area is the area where the debugging target is located in the shooting picture, and the second area is the area where the guide frame is located in the shooting picture.
  16. The device according to claim 15, wherein the acquisition module is further configured to acquire a target type of the debugging target; and
    when the display module displays the guide frame on the shooting picture, the type of the guide frame is determined based on the target type.
  17. The device according to claim 15 or 16, wherein the acquisition module is further configured to acquire a resolution of the camera; and
    when the display module displays the guide frame on the shooting picture, the size of the guide frame is determined based on the resolution.
  18. The device according to any one of claims 15 to 17, wherein the second area and a third area have different transparency, and the third area is the area of the shooting picture other than the second area.
  19. The device according to any one of claims 15 to 18, wherein the display module is specifically configured to display the outline of the second area on the shooting picture.
  20. The device according to any one of claims 15 to 19, further comprising:
    a determination module, configured to determine a focus area and/or encoding parameters of the camera so that the definition of the second area is higher than that of a third area, wherein the third area is the area of the shooting picture other than the second area.
  21. The device according to any one of claims 15 to 20, further comprising:
    a detection module, configured to detect the coincidence degree of the first area and the second area;
    wherein the display module is further configured to output the coincidence degree.
  22. The device according to claim 21, wherein the display module is specifically configured to output the coincidence degree in the form of voice playback and/or text display.
  23. The device according to claim 20 or 21, wherein the display module is further configured to output first prompt information when the coincidence degree is not higher than a first threshold, and the first prompt information is used to prompt the user to adjust the shooting angle and/or shooting parameters of the camera.
  24. The device according to any one of claims 20 to 22, wherein the display module is further configured to output second prompt information when the coincidence degree is higher than the first threshold, and the second prompt information is used to indicate that the debugging of the camera has been completed.
  25. The device according to any one of claims 15 to 24, wherein the display module is further specifically configured to display the guide frame on the shooting picture in response to a first instruction, and the first instruction is used to indicate entering a debugging mode.
  26. The device according to claim 25, wherein the display module is further specifically configured to stop displaying the guide frame on the shooting picture in response to a second instruction, and the second instruction is used to indicate exiting the debugging mode.
  27. The device according to any one of claims 15 to 26, wherein the position or shape of the first area is related to the shooting angle, and the size of the first area is related to the shooting parameters.
  28. The device according to any one of claims 16 to 27, wherein the target type is a person or a vehicle.
  29. A device for debugging a camera, comprising:
    a processor and a memory, wherein the memory is configured to store a program; and
    the processor is configured to execute the program to implement the method according to any one of claims 1 to 14.
  30. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 14 is implemented.
PCT/CN2022/120798 2022-01-11 2022-09-23 Method for debugging camera, and related device WO2023134215A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210028167.3 2022-01-11
CN202210028167 2022-01-11
CN202210312374.1 2022-03-28
CN202210312374.1A CN116471477A (en) 2022-01-11 2022-03-28 Method for debugging camera and related equipment

Publications (1)

Publication Number Publication Date
WO2023134215A1 (en)

Family

ID=87183013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120798 WO2023134215A1 (en) 2022-01-11 2022-09-23 Method for debugging camera, and related device

Country Status (2)

Country Link
CN (1) CN116471477A (en)
WO (1) WO2023134215A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141781A1 (en) * 2008-12-05 2010-06-10 Tsung Yi Lu Image capturing device with automatic object position indication and method thereof
US20100171874A1 (en) * 2009-01-05 2010-07-08 Samsung Electronics Co., Ltd. Apparatus and method for photographing image in portable terminal
CN106534669A (en) * 2016-10-25 2017-03-22 华为机器有限公司 Shooting composition method and mobile terminal
CN109344715A (en) * 2018-08-31 2019-02-15 北京达佳互联信息技术有限公司 Intelligent composition control method, device, electronic equipment and storage medium
CN111277759A (en) * 2020-02-27 2020-06-12 Oppo广东移动通信有限公司 Composition prompting method and device, storage medium and electronic equipment
CN111757098A (en) * 2020-06-30 2020-10-09 北京百度网讯科技有限公司 Debugging method and device of intelligent face monitoring camera, camera and medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141781A1 (en) * 2008-12-05 2010-06-10 Tsung Yi Lu Image capturing device with automatic object position indication and method thereof
US20100171874A1 (en) * 2009-01-05 2010-07-08 Samsung Electronics Co., Ltd. Apparatus and method for photographing image in portable terminal
CN106534669A (en) * 2016-10-25 2017-03-22 华为机器有限公司 Shooting composition method and mobile terminal
CN109344715A (en) * 2018-08-31 2019-02-15 北京达佳互联信息技术有限公司 Intelligent composition control method, device, electronic equipment and storage medium
CN111277759A (en) * 2020-02-27 2020-06-12 Oppo广东移动通信有限公司 Composition prompting method and device, storage medium and electronic equipment
CN111757098A (en) * 2020-06-30 2020-10-09 北京百度网讯科技有限公司 Debugging method and device of intelligent face monitoring camera, camera and medium

Also Published As

Publication number Publication date
CN116471477A (en) 2023-07-21
