WO2022062421A1 - Detection method for photographing device and related apparatus - Google Patents

Detection method for photographing device and related apparatus

Info

Publication number
WO2022062421A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
photographing device
parameter
image
preset range
Prior art date
Application number
PCT/CN2021/093015
Other languages
English (en)
French (fr)
Inventor
徐天宇
闫冬升
李纪楷
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2022062421A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/67 - Focus control based on electronic image sensor signals

Definitions

  • the present application relates to the technical field of image processing, and in particular, to a detection method of a photographing device and a related device.
  • Smart cameras are no longer limited to the original video recording and general alarm functions, but have evolved intelligent functions such as face detection, face recognition, vehicle detection and license plate recognition.
  • the present application provides a detection method and a related device for a photographing device, which can detect a picture captured by a smart camera and give relevant suggestions.
  • the present application provides a method for detecting a photographing device and a related device.
  • the method can acquire an image captured by the photographing device and detect a target in the image; determine effect parameters of the target, where the effect parameters mainly include the relative position of the target with respect to the photographing device; and, if the effect parameters are not within a preset range, determine an adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters and the preset range.
  • the method determines whether the target photographed by the photographing device meets the requirements by comparing the gap between the effect parameter of the target and the preset range, and determines the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameter and the preset range; relevant recommendations are given so that inexperienced installers can make adjustments based on this adjustment method or directly use this adjustment method to adjust automatically.
  • the relative position of the target relative to the photographing device includes an angle parameter of the target relative to the photographing device, and the angle parameter includes a pitch angle and/or a yaw angle and/or a roll angle.
  • the pitch angle and/or the yaw angle and/or the roll angle can be used to accurately determine the direction of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment or provide adjustment suggestions.
  • the adjustment method for determining the relative position of the photographing device according to the comparison result of the effect parameter of the target and the preset range includes: if the pitch angle in the angle parameter is not within the preset range , then determine that one of the adjustment methods is to rotate the photographing device up or down; if the yaw angle in the angle parameter is not within the preset range, then determine that one of the adjustment methods is to rotate the photographing device to the left or right; if the angle parameter If the roll angle in is not within the preset range, determine that one of the adjustment methods is to rotate the shooting device clockwise or counterclockwise.
  • the pitch angle and/or the yaw angle and/or the roll angle can be used to accurately determine the direction of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment or provide adjustment suggestions.
  • the relative position of the target relative to the photographing device includes a position parameter of the target in the image, and the position parameter is the coordinate of the target in the image.
  • the position of the target relative to the photographing device can be accurately determined by using the coordinates of the target in the image, so as to determine an appropriate adjustment method, perform automatic adjustment or provide adjustment suggestions.
  • the adjustment method for determining the relative position of the photographing device according to the comparison result between the effect parameter of the target and the preset range includes: if the abscissa parameter in the position parameter is not within the preset range, it is determined that one of the adjustment methods is to translate the photographing device horizontally; if the ordinate parameter in the position parameter is not within the preset range, it is determined that one of the adjustment methods is to translate the photographing device vertically.
  • the position of the target relative to the photographing device can be accurately determined by using the coordinates of the target in the image, so as to determine an appropriate adjustment method, perform automatic adjustment or provide adjustment suggestions.
  • the effect parameter includes the resolution of the target
  • the adjustment method for determining the relative position of the photographing device according to the comparison result between the effect parameter of the target and the preset range includes: if the resolution of the target in the image is less than the first preset threshold, it is determined that one of the adjustment methods is to shorten the distance between the photographing device and its photographed object or to lengthen the focal length of the photographing device; if the resolution of the target in the image is greater than the first preset threshold, it is determined that one of the adjustment methods is to increase the distance between the photographing device and its photographed object or to shorten the focal length of the photographing device.
  • using the resolution of the target in the image, the size of the area occupied by the target in the image can be accurately determined, and this size is related to the shooting distance between the target and the photographing device, so an appropriate adjustment method can be determined according to the shooting distance, to perform automatic adjustment or provide adjustment suggestions.
  • the effect parameter includes a sharpness parameter of the target
  • the adjustment method for determining the relative position of the photographing device according to the comparison result between the effect parameter of the target and the preset range includes: if the sharpness parameter of the target is not within the preset range, it is determined that one of the adjustment methods is to adjust the focal length of the photographing device.
  • the sharpness parameter of the target can be used to accurately determine how sharp the target is in the image, and this sharpness is related to whether the photographing device has accurately focused on the target, so an appropriate adjustment method can be determined according to the focal length, to perform automatic adjustment or provide adjustment suggestions.
  • the method further includes: adjusting the photographing device according to the adjustment method and/or displaying the adjustment method.
  • the adjustment method may be displayed, or may be used as a basis for automatic adjustment.
  • the method further includes: obtaining parameter information of the photographing device; and synthesizing the parameter information of the photographing device, the effect parameters of the target, and the image into one display image and displaying it.
  • the current information can be clearly displayed through a display image, so that the installer can obtain key information when observing the display image, and it is easier to adjust the photographing equipment.
  • the present application provides a detection apparatus for a photographing device, including: an acquisition module for acquiring an image; a processing module for detecting a target in the image according to the image; the processing module is further configured to determine effect parameters of the target, where the effect parameters include the relative position of the target with respect to the photographing device; and, if the effect parameters of the target are not within the preset range, the processing module is further configured to determine the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range.
  • the relative position of the target relative to the photographing device includes an angle parameter of the target relative to the photographing device, and the angle parameter includes a pitch angle and/or a yaw angle and/or a roll angle.
  • the processing module is further configured to: if the pitch angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device upward or downward; if the yaw angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device to the left or right; and, if the roll angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device clockwise or counterclockwise.
  • the relative position of the target relative to the photographing device includes a position parameter of the target in the image, and the position parameter is the coordinate of the target center in the image.
  • the processing module is further configured to: if the abscissa parameter in the position parameter is not within the preset range, determine that one of the adjustment methods is to horizontally translate the photographing device; if the position parameter If the ordinate parameter in is not within the preset range, it is determined that one of the adjustment methods is to vertically translate the shooting device.
  • the effect parameter includes the resolution of the target
  • the processing module is further configured to: if the resolution of the target in the image is smaller than the first preset threshold, determine that one of the adjustment methods is to shorten the distance between the photographing device and its photographed object or to lengthen the focal length of the photographing device; if the resolution of the target in the image is greater than the first preset threshold, determine that one of the adjustment methods is to increase the distance between the photographing device and its photographed object or to shorten the focal length of the photographing device.
  • the effect parameter includes a sharpness parameter of the target
  • the processing module is further configured to: if the sharpness parameter of the target is not within the preset range, determine that one of the adjustment methods is to adjust the focal length of the photographing device.
  • the processing module is further configured to: adjust the photographing device and/or display the adjustment manner according to the adjustment manner.
  • the acquiring module is further configured to acquire parameter information of the shooting device; the processing module is further configured to synthesize the parameter information of the shooting device, the effect parameters of the target, and the image into one display image and display it.
  • the present application provides a camera device including a lens, a sensor, and a processor, wherein: the lens is used to receive light, the sensor is used to perform photoelectric conversion on the light received by the lens to generate an image, and the processor is used to implement the method of the first aspect.
  • FIG. 1 is a schematic structural diagram of an embodiment of the present application
  • FIG. 2 is a flowchart of a method for detecting a photographing device provided by an embodiment of the present application
  • FIG. 3 is an operation interface diagram of a client in an embodiment of the application
  • FIG. 4 is a diagram of a diagnosis result feedback interface provided by an embodiment of the present application.
  • FIG. 5 is an example diagram of a display image provided by an embodiment of the present application.
  • FIG. 6 is an effect diagram after adjustment provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an embodiment of the present application.
  • FIG. 8 is a signaling diagram of an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a detection device of a photographing device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a camera device according to an embodiment of the present application.
  • Embodiments of the present application provide a method for detecting a photographing device and a related device, which can detect a picture captured by a smart camera, automatically adjust the camera, or give relevant suggestions.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or illustrations. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner.
  • Smart cameras are no longer limited to the original video recording and general alarm functions, but have evolved intelligent functions such as face detection, face recognition, vehicle detection and license plate recognition. Face recognition and license plate recognition of smart cameras require clear and accurate snapshots, which impose strict requirements on the erection and installation of cameras.
  • the installation of existing cameras is mostly performed by installers who are not technical developers, and the installers' experience also varies widely, which leads to great differences in the cameras' final imaging effects.
  • although the camera is shipped with mounting and installation instructions, such as installation height and installation angle, the on-site installation environment is often complex and may not meet the mounting conditions, resulting in a low face recognition or license plate recognition rate.
  • the embodiments of the present application provide a method for detecting a photographing device and a related device, which can detect a picture captured by a smart camera, automatically adjust the camera, or give relevant suggestions.
  • a schematic diagram of the architecture of the embodiments of the present application is first given:
  • FIG. 1 is a schematic structural diagram of an embodiment of the present application.
  • the architecture includes a camera device and a client/server.
  • the photographing device may be any device capable of photographing images, such as a fixed camera, a pan-tilt camera, a surveillance camera, and the like, which is not limited in this embodiment of the present application.
  • the client can be a client installed on a terminal device, and the shooting device can communicate with the terminal device through a wired connection or wireless connection.
  • the terminal device can be a mobile phone, computer, tablet, etc., and the client can be an application (APP) installed on a mobile phone or tablet, or a web client on a web page, which is not limited in this embodiment of the present application.
  • one of the implementation manners is: the user clicks a trigger button on the client terminal, so that the terminal device sends a trigger signal to the photographing device. Then, the photographing device may execute the detection method of the photographing device provided by the embodiment of the present application after acquiring the image, and transmit the obtained adjustment method to the client through a corresponding interface, so that the client can present the adjustment method.
  • the user can manually adjust the shooting device directly according to the presented adjustment method, or click the automatic adjustment button, and the terminal device will send an automatic adjustment instruction to the shooting device, so that the shooting device can automatically adjust.
  • the PTZ camera can be adjusted up, down, left, right, etc. according to the automatic adjustment instructions.
  • the user clicks a trigger button on the client, so that the terminal device sends a trigger signal to the photographing device.
  • the photographing device may send the acquired image to the client, so that the client executes the method for detecting the photographing device provided by the embodiment of the present application, and then presents the obtained adjustment method to the user. Subsequent situations are similar to the previous implementation, and are not repeated here.
  • the photographing device can communicate with the server through a wired connection or a wireless connection
  • the server can be connected with multiple terminal devices
  • the multiple terminal devices can access the server to obtain information about the photographing device.
  • the photographing device may execute the detection method for the photographing device provided by the embodiment of the present application after acquiring the image, and transmit the obtained adjustment method to the server through a corresponding interface;
  • the server provides a corresponding interface, so that the user can query the adjustment method through the terminal device.
  • the user can manually adjust the photographing device according to the queried adjustment method, or can send an automatic adjustment instruction to the photographing device through the terminal device and the server, so that the photographing device can be automatically adjusted.
  • the server may receive the image data of the photographing device, and then execute the detection method for the photographing device provided by the embodiment of the present application to obtain the related adjustment method; other situations are similar to the previous implementation manner and are not repeated here.
  • the method for detecting a photographing device provided in the embodiments of the present application may be executed by a photographing device, a client, or a server.
  • the embodiments of the present application are described by taking a photographing device as an example; the implementation by other devices is similar and is not repeated here.
  • FIG. 2 is a flowchart of a method for detecting a photographing device provided by an embodiment of the present application. The process includes:
  • the photographing device can capture images.
  • the photographing device can convert the data format of the photographed image into a format suitable for subsequent processing.
  • the photographing device may convert the photographed image into a YUV data format. Due to differences in systems, algorithms, programming languages, etc. adopted by the photographing device in practical applications, the data format of the image may adopt other suitable formats, and the embodiment of the present application does not limit the data format of the image.
  • FIG. 3 is an operation interface diagram of a client terminal in an embodiment of the present application.
  • the upper right side is the detection control interface
  • the first column can select the type of image detection, that is, the camera is mainly used to detect faces or license plates.
  • the photographing device may also be used to detect other types of targets, and corresponding buttons can be added to this column. It can be understood that, after the user selects one of the buttons in the detection control interface, the photographing device can select the target detection algorithm corresponding to the selected button to detect the target in step 202 . Therefore, the embodiments of the present application can actually perform snapshot analysis on faces and license plates respectively, and can also perform snapshot analysis on other types of targets according to actual needs, which greatly expands the application scenarios of the present application.
  • the second column is the detection area list.
  • the user can select “Polygon Area Drawing”, “Rectangular Area Drawing”, “Full Screen Drawing” or “Clear Drawing Graphics”. The user clicks “Polygon Area Drawing” to add a preset polygonal area to the detection area, clicks “Rectangular Area Drawing” to add a preset rectangular area to the detection area, clicks “Full Screen Drawing” to set the detection area to full screen, and clicks “Clear Drawing Graphics” to cancel all of the above settings.
  • the third column is the “Manual Snapshot” button or the “Enable Continuous Tuning” button.
  • after the user clicks the “Enable Continuous Tuning” button and selects the time interval (300 seconds in the example of FIG. 3), the shooting device can perform step 201 to capture an image every 300 seconds, and perform the subsequent steps to obtain the relevant adjustment method, which is presented in the diagnosis result feedback interface on the lower right side.
  • the lower right side is the diagnosis result feedback interface, which is used to display the detected information and related adjustment methods.
  • the user has not clicked the "Manual Snapshot” button or the “Enable Continuous Tuning” button, so the diagnostic result feedback interface does not present the diagnostic result yet.
  • the left side is the image captured by the photographing device and the adjustment buttons, mode options and speed options of the photographing device. This embodiment of the present application does not limit the number of adjustment buttons.
  • the adjustment button may be an up and down rotation button, a left and right rotation button, a focal length adjustment button, etc. of the photographing device, which are not limited in this embodiment of the present application.
  • the mode options include continuous mode, pause mode, etc., which are used to control whether the shooting device performs continuous shooting or paused shooting. If the user selects the continuous mode, the user can continue to select the speed of continuous shooting on this interface. In practical applications, other buttons may also be set in the operation interface diagram, which is not limited in this embodiment of the present application.
  • the photographing device may detect the target in the image through the target detection algorithm.
  • the target may be an object such as a human face, a license plate, or the like, or may be a pattern, a number, or the like, and the specific type of the target is not limited in this embodiment of the present application.
  • the photographing device can be configured with different target detection algorithms to detect targets in images.
  • the photographing device may detect a human face in the image through a face detection algorithm.
  • the photographing device can detect the license plate in the image through the license plate recognition algorithm.
  • the above-mentioned target detection algorithms such as the face detection algorithm and the license plate recognition algorithm can be implemented by using a neural network model, and the embodiment of the present application does not limit which target detection algorithm is specifically used.
  • FIG. 4 is a diagram of a diagnosis result feedback interface provided by an embodiment of the present application.
  • the left side of the figure is the image captured by the camera.
  • the detected target is circled by a rectangular frame selection area in the image.
  • the photographing device may detect multiple targets in the image, and the photographing device can select all these targets in a frame.
  • the photographing device may randomly select one of the targets to determine the adjustment method, or may select the target with the largest rectangular frame selection area to determine the adjustment method, which is not limited in this embodiment of the present application.
  • the effect parameter of the target may include a relative position of the target relative to the photographing device.
  • the relative position includes parameters such as the direction and position of the target relative to the photographing device, which are not limited in this embodiment of the present application.
  • the effect parameter of the target is an angle parameter of the target relative to the photographing device, such as Euler angle, including pitch angle and/or yaw angle and/or roll angle.
  • the photographing device can determine the Euler angle of the target relative to the photographing device through the angle detection algorithm.
  • This embodiment of the present application does not limit which angle detection algorithm is used.
  • when the photographing device detects multiple targets in the image, it can select one of the targets to calculate the effect parameters and, in step 204, determine the adjustment method according to that target's effect parameters; alternatively, it can calculate the effect parameters of all the targets and then select one of the targets according to the effect parameters to perform step 204.
  • the photographing device may also select a target through other selection methods, which is not limited in this embodiment of the present application.
  • the effect parameter of the target further includes a position parameter of the target in the image
  • the position parameter may be the coordinates of (the center of) the target in the image (including the abscissa parameter and the ordinate parameter).
  • the photographing device may establish a Cartesian coordinate system with the endpoint at the lower left corner of the image as the origin to determine the position parameter of the target in the image. In practical applications, the photographing device may also determine the position parameter of the target in the image in other ways, which is not limited in the embodiment of the present application.
  • the effect parameter of the target also includes the resolution of the target.
  • the resolution of the target may refer to the number of pixels of the rectangular area corresponding to the target.
  • the resolution of 25px*25px refers to the horizontal and vertical arrangement of 25 rows/columns of pixels, so there are 625 pixels in total.
  • the size of the pixels is generally the same, so the larger the resolution of the target, the larger the corresponding rectangular area of the target. Therefore, according to the resolution of the target, it can be determined whether the size of the rectangular area corresponding to the target meets the requirements.
  • the effect parameter of the target includes a sharpness parameter of the target.
  • the sharpness parameter of the target may refer to a parameter related to sharpness, such as the bit rate. The camera can read these parameters from the image.
  • the photographing device can also obtain its own parameter information, such as the UID, name, and IP address of the photographing device, and obtain the time at which the image was captured, and then combine this parameter information, the effect parameters of the target, and the image into one display image.
  • FIG. 5 is an example diagram of a display image provided by an embodiment of the present application.
  • the top parameters are the name of the shooting device, IP address, the time when the image was taken, and the resolution of the target.
  • the photographing device can provide a corresponding interface, so that the client or the server can obtain the display image and display it through the interface.
  • the parameter information of the photographing device and the effect parameters of the target acquired by the photographing device can be displayed.
  • the user can obtain the information from the relevant interface of the photographing device through the terminal device, and the terminal device can display the information on the display screen.
  • the name of the shooting device is Camera 1
  • the IP address is X.X.X.X
  • the time of shooting the image is 11:11 on January 1, 2020
  • the information of target 1 is: position parameter [x:1000, y:1000], resolution [25px*25px]
  • the information of target 2 is: position [x:100, y:100], resolution [100px*100px].
  • the photographing device may also detect other information, so that the user can observe the information through the display screen of the terminal device.
  • the embodiment of the present application does not limit the amount and type of displayed information.
  • the effect parameters of the target are not within the preset range, indicating that the target does not meet the shooting requirements, and the installer needs to further adjust the shooting equipment to obtain better shooting effects.
  • the photographing device can determine the adjustment method according to the comparison result between the effect parameter of the target and the preset range, so as to automatically adjust or prompt the installer to adjust.
  • the specific process of determining the adjustment method is as follows:
  • the effect parameter of the target includes the relative position of the target relative to the shooting device, the relative position includes the angle parameter of the target relative to the shooting device, and the angle parameter includes the pitch angle and/or the yaw angle and/or the roll angle; then, if the pitch angle in the angle parameter is not within the preset range, the shooting device determines that one of the adjustment methods is to rotate the shooting device up or down; if the yaw angle in the angle parameter is not within the preset range, the shooting device determines that one of the adjustment methods is to rotate the shooting device left or right; if the roll angle in the angle parameter is not within the preset range, the shooting device determines that one of the adjustment methods is to rotate the shooting device clockwise or counterclockwise.
  • exemplarily, in FIG. 4 the target at the lower left of the image lowers its head, so the photographing device detects that the pitch angle of the target is not within the preset range, and determines that one of the adjustment methods is to rotate the photographing device up or down. Further, since the target lowers its head, the adjustment method is to tilt the photographing device upward.
  • the adjustment method can be expressed in different words, for example, "please raise the camera to reduce the pitch angle" in FIG. 4 , and the embodiment of the present application does not limit the specific word expression.
  • the effect parameter of the target includes the relative position of the target relative to the shooting device.
  • the relative position includes the position parameter of the target in the image.
  • the position parameter is the coordinate of the target in the image. If the abscissa parameter in the position parameter is not within the preset range, it is determined that one of the adjustment methods is to translate the photographing device horizontally; if the ordinate parameter in the position parameter is not within the preset range, it is determined that one of the adjustment methods is to translate the photographing device vertically. Exemplarily, in FIG. 4, the target on the lower left side of the image is relatively far to the left, so the photographing device detects that the abscissa parameter of the target is not within the preset range, and determines that one of the adjustment methods is to horizontally translate the photographing device. Specifically, the adjustment method is "Please pan the camera to the left so that the target is in the center of the image".
  • the effect parameter of the target includes the resolution of the target; if the resolution of the target is smaller than the first preset threshold, it is determined that one of the adjustment methods is to shorten the distance between the shooting device and its shooting object or lengthen the focal length of the shooting device; if the resolution of the target is greater than the first preset threshold, it is determined that one of the adjustment methods is to increase the distance between the shooting device and its shooting object or shorten the focal length of the shooting device.
  • when the photographing device has the function of adjusting the focal length, the adjustment can generally be made by adjusting the focal length.
  • when the photographing device has a fixed focal length, the method of adjusting the distance can be determined for adjustment.
  • exemplarily, the resolution of the target in FIG. 4 is smaller than the first preset threshold, indicating that the rectangular area corresponding to the target is not large enough, so the camera needs to zoom in (lengthen the focal length of the photographing device).
  • the effect parameter of the target includes the sharpness parameter of the target. If the sharpness parameter of the target is not within the preset range, it is determined that one of the adjustment methods is to adjust the focal length of the photographing device. Exemplarily, if the clarity of the target image in FIG. 4 is sufficient, no adjustment is required.
  • the photographing device may adjust the photographing device according to the adjustment method obtained in step 204 and/or display the adjustment method.
  • the user may click the "Auto Adjustment” button on the operation interface as shown in FIG. 4 to trigger the process of adjusting the photographing device according to the adjustment method obtained in step 204 .
  • the photographing device receives the trigger signal related to the "automatic adjustment” button, the photographing device can automatically adjust according to the adjustment method obtained in step 204 . It is understandable that a general camera can adjust the focal length, while a pan-tilt camera can not only adjust the focal length, but also rotate the camera.
  • the photographing device may determine relevant adjustment functions according to its own device configuration to match the adjustment method obtained in step 204 .
  • the device configuration of the photographing device in this embodiment of the present application is not limited.
  • the "one-key adjustment function" can further reduce the user's operating cost and improve the usability of the system.
  • the photographing device can display the adjustment method obtained in step 204, as shown in FIG. 4 .
  • the adjustment method is converted into a common language to prompt the user how to adjust, so that the installer can better perform manual adjustment. This display method has better reference for users and improves the ease of use for users.
  • FIG. 6 is an effect diagram after adjustment provided by an embodiment of the present application.
  • the shooting equipment can shoot a more suitable target image, obtain a better imaging effect, and make face recognition or license plate recognition more accurate.
  • FIG. 7 is a schematic flowchart of an embodiment of the present application.
  • when the user triggers a snapshot (for example, by clicking the "Manual Snapshot" button in FIG. 3), the capture device obtains the image frame closest to the capture moment from the live video stream for preprocessing, and converts it into image information that the algorithm can process.
  • the shooting device can extract the feature information in the image information through the algorithm model, and then fit the feature information to the original data model to give the algorithm detection result, for example whether there is a target in the image information and the relative position of the detected target, which is finally integrated into the output of the algorithm.
  • the business processing module in the photographing device will convert the algorithm detection result into adjustment suggestions and feed it back to the user.
  • the user can make manual adjustments according to the adjustment suggestions or instruct the shooting equipment to make automatic adjustments.
  • FIG. 8 is a signaling diagram of an embodiment of the present application.
  • the client may be a web client, or may be a built-in client on the camera.
  • the user can access the web client through the terminal device, and the web client can communicate with the camera, obtain the information of the camera, and send instructions to the camera.
  • the client can display a video preview interface (as shown in Figure 3), trigger an operation switch (such as a "manual capture” button) and a result display (as shown in Figure 4).
  • the request sent by the client to the core management module is mainly used to notify the camera server to evaluate or adjust the current image, and the request mainly includes basic information such as the detection type (face detection or license plate detection in the embodiment corresponding to FIG. 3) and the detection mode (the "Manual Snapshot" button and the "Enable Continuous Tuning" button in the embodiment corresponding to FIG. 3 correspond to two modes, respectively).
  • YuvService represents the live video stream data processing module of the camera server, and the image data used in the determination process is obtained from this module; this module is responsible for image processing and video encoding and decoding, and converts the image into data that can be processed by the algorithm.
  • ImgEvalChain is the core management module of the camera server. After receiving the request sent by the client, the ImgEvalChain module obtains the image data from the YuvService module, processes the obtained data, and sends it to the AlgProcess module for processing; the processing result of the AlgProcess module and the image data from YuvService are encapsulated into a request and sent to the ImgEncode module. After the ImgEncode module processes the request, it returns a response, which mainly contains the diagnostic picture processed by the ImgEncode module.
  • finally, ImgEvalChain converts the algorithm results into natural language (that is, determines an adjustment method as in step 204 of the embodiment corresponding to FIG. 2 above), generates a tuning suggestion, encapsulates it together with the diagnostic picture, and returns it to the client.
  • after the client obtains the diagnosis result, it can send an AutoAdjust request to notify the camera to perform automatic adjustment.
  • AlgProcess is the technical core module of the present invention.
  • the processing of the AlgProcess module is also completed on the camera side. It is mainly responsible for processing the data in the image, performing target detection and target scoring (including but not limited to pixel size, target position, target angle, and sharpness), and provides the most basic algorithm results.
  • ImgEncode is the module on the camera server responsible for visualizing the diagnosis results. This module integrates the basic information of the device, the captured image, and the algorithm output results (that is, the effect parameters of the target) into one picture (such as the picture shown in FIG. 5 above); the client presents the result intuitively through this picture, allowing users to get feedback.
  • an algorithm is used to capture and analyze the target in the scene, and the analysis result is given, which can quantify the quality of the mounting angle and height during the site survey and reduce subjective judgment caused by human factors.
  • the embodiment of the present application can simultaneously perform snapshot analysis on the human face and the license plate, which greatly expands the application scenarios of the present application.
  • FIG. 9 is a schematic diagram of a detection device of a photographing device according to an embodiment of the present application.
  • the apparatus 900 includes:
  • an acquisition module 901 configured to perform step 201 in the respective embodiments corresponding to FIG. 2 above;
  • the processing module 902 is configured to execute step 202, step 203, and step 204 in the respective embodiments corresponding to FIG. 2 above.
  • FIG. 10 is a schematic diagram of an imaging device (eg, a video camera) provided by an embodiment of the present application.
  • the camera device 1000 may be the photographing device in the architecture corresponding to FIG. 1; the camera device 1000 includes a lens 1001, a sensor 1002, and a processor 1003, wherein the lens 1001 is used to receive light, the sensor 1002 is used to perform photoelectric conversion on the light received by the lens to generate an image, and the processor 1003 is used to implement the method of the method embodiment shown in FIG. 2.
  • the function of the lens 1001 is to present the light image of the observed object on the sensor of the camera, which is also called optical imaging.
  • the lens 1001 combines various optical parts (reflectors, transmission mirrors, prisms) of different shapes and different media (plastic, glass, or crystal) in a certain way, so that after the light is transmitted or reflected by these optical parts, its transmission direction is changed as required and the light is received by the receiving device, completing the optical imaging process of the object.
  • each lens 1001 is composed of a plurality of groups of lenses with different curved surfaces and curvatures combined at different intervals.
  • the focal length of the lens is determined by the selection of indicators such as spacing, lens curvature, and light transmittance.
  • the main parameters of the lens 1001 include: effective focal length, aperture, maximum image plane, field of view, distortion, relative illumination, etc.
  • the values of each index determine the overall performance of the lens 1001.
  • the sensor 1002 (also known as an image sensor) is a device that converts an optical image into an electronic signal, and is widely used in digital cameras and other electronic and optical devices.
  • Common sensors 1002 include: a charge-coupled device (CCD) and a complementary metal oxide semiconductor (complementary MOS, CMOS). Both CCD and CMOS have a large number (eg, tens of millions) of photodiodes, each photodiode is called a photosensitive cell, and each photosensitive cell corresponds to a pixel. During exposure, the photodiode converts the light signal into an electrical signal containing brightness (or brightness and color) after receiving light, and the image is reconstructed accordingly.
  • Bayer array is a common image sensor technology that can be used in CCD and CMOS.
  • Bayer array uses Bayer color filter to make different pixels only sensitive to one of the three primary colors of red, blue and green. These pixels are interleaved and then interpolated by demosaicing to restore the original image.
  • Bayer arrays can be applied to CCD or CMOS, and sensors using Bayer arrays are also called Bayer sensors.
  • in addition to Bayer sensors, there are sensor technologies such as X3 (developed by Foveon).
  • X3 technology uses three layers of photosensitive elements, with each layer recording one of the RGB color channels, so it is an image sensor that can capture all colors at a single pixel.
  • the processor 1003 may be a collection of multiple chips, or may be a single chip.
  • the processor 1003 is, for example, a system on a chip (SoC).
  • the processor 1003 may include an image processor (ISP), which is used to convert the image generated by the sensor into a three-channel format (e.g., YUV), improve the image quality, and detect whether there is a target object in the image; it can also be used to encode the image.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Provided are a detection method for a photographing device and a related apparatus. The method determines whether the target photographed by the photographing device meets the requirements by comparing the gap between the effect parameter of the target and a preset range, determines an adjustment method for the relative position of the photographing device according to the comparison result between the effect parameter and the preset range, and gives relevant suggestions, so that an inexperienced installer can make adjustments based on the adjustment method or directly use the adjustment method to perform automatic adjustment.

Description

Detection method for photographing device and related apparatus
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 22, 2020, with application number 202011003383.X and invention title "Camera tuning determination method", and priority to the Chinese patent application filed with the Chinese Patent Office on January 5, 2021, with application number 202110016897.7 and invention title "Detection method for photographing device and related apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular, to a detection method for a photographing device and a related apparatus.
Background Art
With the development of technology, traditional cameras are gradually being replaced by smart cameras. Smart cameras are no longer limited to the original video recording and ordinary alarm functions, and have evolved intelligent functions such as face detection, face recognition, vehicle detection, and license plate recognition.
Face recognition and license plate recognition by a smart camera require clear and accurate snapshots, which places strict requirements on the mounting and installation of the smart camera. However, the installation of existing smart cameras is mostly performed by installers who are not technical developers; if an inexperienced installer installs and adjusts the smart camera, the final imaging effect of the smart camera is generally poor. Therefore, a detection method is urgently needed to detect whether the snapshot of a smart camera meets the requirements and to give relevant suggestions to help inexperienced installers with installation and adjustment.
Summary of the Invention
The present application provides a detection method for a photographing device and a related apparatus, which can detect the picture captured by a smart camera and give relevant suggestions.
The present application provides a detection method for a photographing device and a related apparatus. The method can acquire an image captured by the photographing device and detect a target in the image; determine effect parameters of the target, where the effect parameters mainly include the relative position of the target with respect to the photographing device; and, if the effect parameters are not within a preset range, determine an adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters and the preset range. By comparing the gap between the effect parameters of the target and the preset range, the method determines whether the target captured by the photographing device meets the requirements, determines an adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters and the preset range, and gives relevant suggestions, so that an inexperienced installer can make adjustments based on the adjustment method or directly use the adjustment method to perform automatic adjustment.
With reference to the first aspect, in an implementation of the present application, the relative position of the target with respect to the photographing device includes an angle parameter of the target relative to the photographing device, and the angle parameter includes a pitch angle and/or a yaw angle and/or a roll angle. In this implementation, the pitch angle and/or the yaw angle and/or the roll angle can be used to accurately determine the direction of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment, or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range includes: if the pitch angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is to rotate the photographing device upward or downward; if the yaw angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is to rotate the photographing device to the left or right; and, if the roll angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is to rotate the photographing device clockwise or counterclockwise. In this implementation, the pitch angle and/or the yaw angle and/or the roll angle can be used to accurately determine the direction of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment, or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, the relative position of the target with respect to the photographing device includes a position parameter of the target in the image, and the position parameter is the coordinates of the target in the image. In this implementation, the coordinates of the target in the image can be used to accurately determine the position of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment, or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range includes: if the abscissa parameter in the position parameter is not within the preset range, determining that one of the adjustment methods is to translate the photographing device horizontally; and, if the ordinate parameter in the position parameter is not within the preset range, determining that one of the adjustment methods is to translate the photographing device vertically. In this implementation, the coordinates of the target in the image can be used to accurately determine the position of the target relative to the photographing device, so as to determine an appropriate adjustment method, perform automatic adjustment, or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, the effect parameters include the resolution of the target, and determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range includes: if the resolution of the target in the image is less than a first preset threshold, determining that one of the adjustment methods is to shorten the distance between the photographing device and its photographed object or to lengthen the focal length of the photographing device; and, if the resolution of the target in the image is greater than the first preset threshold, determining that one of the adjustment methods is to increase the distance between the photographing device and its photographed object or to shorten the focal length of the photographing device. In this implementation, the resolution of the target in the image can be used to accurately determine the size of the area occupied by the target in the image, and this size is related to the shooting distance between the target and the photographing device, so an appropriate adjustment method can be determined according to the shooting distance, to perform automatic adjustment or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, the effect parameters include a sharpness parameter of the target, and determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range includes: if the sharpness parameter of the target is not within the preset range, determining that one of the adjustment methods is to adjust the focal length of the photographing device. In this implementation, the sharpness parameter of the target can be used to accurately determine how sharp the target is in the image, and this sharpness is related to whether the photographing device has accurately focused on the target, so an appropriate adjustment method can be determined according to the focal length, to perform automatic adjustment or provide adjustment suggestions.
With reference to the first aspect, in an implementation of the present application, after the adjustment method for the relative position of the photographing device is determined according to the comparison result between the effect parameters of the target and the preset range, the method further includes: adjusting the photographing device according to the adjustment method and/or displaying the adjustment method. In this implementation, the adjustment method can be displayed, or can be used as the basis for automatic adjustment.
With reference to the first aspect, in an implementation of the present application, after the effect parameters of the target are determined and before the adjustment method for the relative position of the photographing device is determined according to the comparison result between the effect parameters of the target and the preset range, the method further includes: obtaining parameter information of the photographing device; and synthesizing the parameter information of the photographing device, the effect parameters of the target, and the image into one display image and displaying it. In this implementation, the current information can be clearly presented through a single display image, so that an installer can obtain the key information when observing the display image, making it easier to adjust the photographing device.
In a second aspect, the present application provides a detection apparatus for a photographing device, including: an acquisition module, configured to acquire an image; and a processing module, configured to detect a target in the image according to the image; the processing module is further configured to determine effect parameters of the target, where the effect parameters include the relative position of the target with respect to the photographing device; and, if the effect parameters of the target are not within a preset range, the processing module is further configured to determine an adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range.
With reference to the second aspect, in an implementation of the present application, the relative position of the target with respect to the photographing device includes an angle parameter of the target relative to the photographing device, and the angle parameter includes a pitch angle and/or a yaw angle and/or a roll angle.
With reference to the second aspect, in an implementation of the present application, the processing module is further configured to: if the pitch angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device upward or downward; if the yaw angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device to the left or right; and, if the roll angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is to rotate the photographing device clockwise or counterclockwise.
With reference to the second aspect, in an implementation of the present application, the relative position of the target with respect to the photographing device includes a position parameter of the target in the image, and the position parameter is the coordinates of the center of the target in the image.
With reference to the second aspect, in an implementation of the present application, the processing module is further configured to: if the abscissa parameter in the position parameter is not within the preset range, determine that one of the adjustment methods is to translate the photographing device horizontally; and, if the ordinate parameter in the position parameter is not within the preset range, determine that one of the adjustment methods is to translate the photographing device vertically.
With reference to the second aspect, in an implementation of the present application, the effect parameters include the resolution of the target, and the processing module is further configured to: if the resolution of the target in the image is less than a first preset threshold, determine that one of the adjustment methods is to shorten the distance between the photographing device and its photographed object or to lengthen the focal length of the photographing device; and, if the resolution of the target in the image is greater than the first preset threshold, determine that one of the adjustment methods is to increase the distance between the photographing device and its photographed object or to shorten the focal length of the photographing device.
With reference to the second aspect, in an implementation of the present application, the effect parameters include a sharpness parameter of the target, and the processing module is further configured to: if the sharpness parameter of the target is not within the preset range, determine that one of the adjustment methods is to adjust the focal length of the photographing device.
With reference to the second aspect, in an implementation of the present application, the processing module is further configured to: adjust the photographing device according to the adjustment method and/or display the adjustment method.
With reference to the second aspect, in an implementation of the present application, the acquisition module is further configured to acquire parameter information of the photographing device, and the processing module is further configured to synthesize the parameter information of the photographing device, the effect parameters of the target, and the image into one display image and display it.
In a third aspect, the present application provides a camera device, including a lens, a sensor, and a processor, where the lens is configured to receive light, the sensor is configured to perform photoelectric conversion on the light received by the lens to generate an image, and the processor is configured to implement the method of the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the architecture of an embodiment of the present application;
FIG. 2 is a flowchart of the detection method for a photographing device provided by an embodiment of the present application;
FIG. 3 is a diagram of the operation interface of a client in an embodiment of the present application;
FIG. 4 is a diagram of a diagnosis result feedback interface provided by an embodiment of the present application;
FIG. 5 is an example diagram of a display image provided by an embodiment of the present application;
FIG. 6 is an effect diagram after adjustment provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of an embodiment of the present application;
FIG. 8 is a signaling diagram of an embodiment of the present application;
FIG. 9 is a schematic diagram of a detection apparatus for a photographing device provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a camera device provided by an embodiment of the present application.
Detailed Description of the Embodiments
Embodiments of the present application provide a detection method for a photographing device and a related apparatus, which can detect the picture captured by a smart camera, automatically adjust the camera, or give relevant suggestions.
The terms "first", "second", "third", "fourth", and the like (if any) in the specification and claims of the present application and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "including" and "corresponding to" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units that are clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to represent examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a specific manner.
With the development of technology, traditional cameras are gradually being replaced by smart cameras. Smart cameras are no longer limited to the original video recording and ordinary alarm functions, and have evolved intelligent functions such as face detection, face recognition, vehicle detection, and license plate recognition. Face recognition and license plate recognition by a smart camera require clear and accurate snapshots, which places strict requirements on the mounting and installation of the camera. However, the installation of existing cameras is mostly performed by installers who are not technical developers, and the installers' experience varies widely, which leads to great differences in the cameras' final imaging effects. Although the camera is shipped with mounting and installation instructions, such as installation height and installation angle, the on-site installation environment is often complex and may not meet the mounting conditions, resulting in a low face recognition or license plate recognition rate.
In view of this, the embodiments of the present application provide a detection method for a photographing device and a related apparatus, which can detect the picture captured by a smart camera, automatically adjust the camera, or give relevant suggestions. To make the descriptions of the following embodiments clear and concise, a schematic diagram of the architecture of the embodiments of the present application is first given:
FIG. 1 is a schematic diagram of the architecture of an embodiment of the present application. The architecture includes a photographing device and a client/server. The photographing device may be any device capable of capturing images, such as a fixed camera, a pan-tilt camera, or a surveillance camera, which is not limited in the embodiments of the present application. The client may be a client installed on a terminal device, and the photographing device may communicate with the terminal device through a wired or wireless connection. The terminal device may be a mobile phone, a computer, a tablet, or the like, and the client may be an application (APP) installed on a mobile phone or tablet, or a web client on a web page, which is not limited in the embodiments of the present application. When implementing the detection method for a photographing device provided by the embodiments of the present application, one implementation is as follows: the user clicks a trigger button on the client, so that the terminal device sends a trigger signal to the photographing device. The photographing device may then execute the detection method for a photographing device provided by the embodiments of the present application after acquiring an image, and transmit the obtained adjustment method to the client through a corresponding interface, so that the client presents the adjustment method. The user may manually adjust the photographing device directly according to the presented adjustment method, or click an automatic adjustment button, in which case the terminal device sends an automatic adjustment instruction to the photographing device so that the photographing device adjusts automatically. For example, a pan-tilt camera can rotate up, down, left, or right according to the automatic adjustment instruction. In another implementation, the user clicks a trigger button on the client, so that the terminal device sends a trigger signal to the photographing device. The photographing device may then send the acquired image to the client, so that the client executes the detection method for a photographing device provided by the embodiments of the present application and presents the obtained adjustment method to the user. Subsequent situations are similar to the previous implementation and are not repeated here.
In other embodiments, the photographing device may communicate with a server through a wired or wireless connection, the server may be connected to multiple terminal devices, and the multiple terminal devices may access the server to obtain information about the photographing device. When implementing the detection method for a photographing device provided by the embodiments of the present application, one implementation is as follows: the photographing device may execute the detection method for a photographing device provided by the embodiments of the present application after acquiring an image, and transmit the obtained adjustment method to the server through a corresponding interface; the server provides a corresponding interface so that the user can query the adjustment method through a terminal device. The user may then manually adjust the photographing device according to the queried adjustment method, or send an automatic adjustment instruction to the photographing device through the terminal device and the server, so that the photographing device adjusts automatically. In another implementation, the server may receive the image data from the photographing device and then execute the detection method for a photographing device provided by the embodiments of the present application to obtain the related adjustment method; other situations are similar to the previous implementation and are not repeated here.
In summary, the detection method for a photographing device provided by the embodiments of the present application may be executed by the photographing device, the client, or the server. The embodiments of the present application are described by taking the photographing device as an example; the execution by other devices is similar and is not repeated here.
FIG. 2 is a flowchart of the detection method for a photographing device provided by an embodiment of the present application. The process includes:
201. Acquire an image captured by the photographing device.
In this embodiment of the present application, after the installer has initially installed the photographing device and connected the power supply, the photographing device can capture images. In practical applications, the photographing device may convert the data format of the captured image into a format suitable for subsequent processing. Exemplarily, the photographing device may convert the captured image into the YUV data format. Because the systems, algorithms, programming languages, and so on adopted by photographing devices differ in practical applications, the data format of the image may be another suitable format, and the embodiments of the present application do not limit the data format of the image.
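The preprocessing step above is described only abstractly. Below is a minimal sketch, assuming OpenCV and a local capture source (both assumptions, not the device's actual on-board pipeline), of converting a captured frame into the YUV format mentioned in the paragraph above.

```python
# Minimal sketch of the preprocessing described above: convert a captured
# frame into YUV so that downstream detection algorithms can consume it.
# Assumes OpenCV; the frame source (a local capture) is an assumption,
# not the patent's actual on-device pipeline.
import cv2

def grab_yuv_frame(source=0):
    cap = cv2.VideoCapture(source)          # hypothetical local camera / stream
    ok, bgr = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("no frame available from source")
    # OpenCV delivers BGR; convert to YUV as the embodiment suggests.
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)

if __name__ == "__main__":
    frame = grab_yuv_frame()
    print("YUV frame shape:", frame.shape)
```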
FIG. 3 is a diagram of the operation interface of the client in an embodiment of the present application. In this interface diagram, the upper right side is the detection control interface. In the first column, the type of image detection can be selected, that is, whether the photographing device is mainly used to detect faces or license plates. In practical applications, the photographing device may also be used to detect other types of targets, in which case corresponding buttons can be added to this column. It can be understood that after the user selects one of the buttons in the detection control interface, the photographing device can select the target detection algorithm corresponding to the selected button to detect the target in step 202. Therefore, the embodiments of the present application can perform snapshot analysis on faces and license plates separately, and can also perform snapshot analysis on other types of targets according to actual needs, which greatly expands the application scenarios of the present application. The second column is the detection area list. The user can select "Polygon Area Drawing", "Rectangular Area Drawing", "Full Screen Drawing", or "Clear Drawing Graphics". The user clicks "Polygon Area Drawing" to add a preset polygonal area to the detection area, clicks "Rectangular Area Drawing" to add a preset rectangular area to the detection area, clicks "Full Screen Drawing" to set the detection area to the full screen, and clicks "Clear Drawing Graphics" to cancel all of the above settings. The third column contains the "Manual Snapshot" button and the "Enable Continuous Tuning" button. The user clicks the "Manual Snapshot" button to trigger the snapshot function of the photographing device; the photographing device performs step 201 to capture an image and performs the subsequent steps to obtain the relevant adjustment method, which is presented in the diagnosis result feedback interface on the lower right side. The user can click the "Enable Continuous Tuning" button and select the time interval between captures; for example, the example in FIG. 3 performs continuous tuning at an interval of 300 seconds. After the user clicks the "Enable Continuous Tuning" button, the photographing device performs step 201 to capture an image every 300 seconds and performs the subsequent steps to obtain the relevant adjustment method, which is presented in the diagnosis result feedback interface on the lower right side.
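The two trigger modes just described ("Manual Snapshot" and "Enable Continuous Tuning" with a user-chosen interval) amount to a one-shot call and a polling loop. The sketch below is only an illustrative approximation; `capture_image` and `diagnose` are hypothetical stand-ins for steps 201 to 204.

```python
# Illustrative sketch of the two trigger modes described above: a one-shot
# manual snapshot and a continuous-tuning loop with a user-chosen interval
# (300 seconds in the FIG. 3 example). capture_image() and diagnose() are
# hypothetical stand-ins for steps 201-204 of the method.
import time

def manual_snapshot(capture_image, diagnose):
    image = capture_image()                 # step 201
    return diagnose(image)                  # steps 202-204 -> adjustment advice

def continuous_tuning(capture_image, diagnose, interval_s=300, rounds=None):
    results = []
    n = 0
    while rounds is None or n < rounds:
        results.append(manual_snapshot(capture_image, diagnose))
        n += 1
        time.sleep(interval_s)              # wait for the configured interval
    return results
```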
In the interface diagram shown in FIG. 3, the lower right side is the diagnosis result feedback interface, which is used to display the detected information and the related adjustment methods. In FIG. 3, the user has not yet clicked the "Manual Snapshot" button or the "Enable Continuous Tuning" button, so the diagnosis result feedback interface does not yet present a diagnosis result. In the interface diagram shown in FIG. 3, the left side shows the image captured by the photographing device as well as the adjustment buttons, mode options, and speed options of the photographing device. The embodiments of the present application do not limit the number of adjustment buttons. The adjustment buttons may be up/down rotation buttons, left/right rotation buttons, focal length adjustment buttons, and so on of the photographing device, which are not limited in the embodiments of the present application. The mode options include a continuous mode, a pause mode, and so on, and are used to control whether the photographing device shoots continuously or pauses shooting. If the user selects the continuous mode, the user can further select the speed of continuous shooting on this interface. In practical applications, other buttons may also be provided in the operation interface diagram, which is not limited in the embodiments of the present application.
202. Detect a target in the image according to the image.
In this embodiment of the present application, the photographing device can detect the target in the image through a target detection algorithm. The target may be an object such as a human face or a license plate, or may be a pattern, a number, or the like; the specific type of the target is not limited in the embodiments of the present application.
According to the different types of targets, the photographing device can be configured with different target detection algorithms to detect targets in the image. Exemplarily, if a human face in the image needs to be detected, the photographing device can detect the face in the image through a face detection algorithm. If a license plate in the image needs to be detected, the photographing device can detect the license plate in the image through a license plate recognition algorithm. The above target detection algorithms, such as the face detection algorithm and the license plate recognition algorithm, can be implemented using neural network models, and the embodiments of the present application do not limit which specific target detection algorithm is used.
FIG. 4 is a diagram of the diagnosis result feedback interface provided by an embodiment of the present application. The left side of the figure shows the image captured by the photographing device. After the photographing device performs step 202, the detected target is marked in the image with a rectangular selection box. It can be understood that the photographing device may detect multiple targets in the image, in which case the photographing device can box all of these targets. In subsequent step 204, the photographing device may randomly select one of the targets to determine the adjustment method, or may select the target with the largest rectangular selection box to determine the adjustment method, which is not limited in the embodiments of the present application.
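To make the detector selection and the "largest rectangular box" option above concrete, here is a hedged sketch; the detector callables are placeholders, not the neural-network models the patent leaves unspecified.

```python
# Sketch of step 202 as described above: pick the detector that matches the
# type selected in the detection control interface, box every target found,
# and (one of the options mentioned above) keep the target with the largest
# rectangular selection area for step 204. The detector callables are
# placeholders, not the patent's actual models.

def detect_targets(image, detection_type, detectors):
    """detectors maps a type ('face', 'license_plate', ...) to a callable
    that returns a list of boxes (x, y, w, h) in pixel coordinates."""
    if detection_type not in detectors:
        raise ValueError(f"unsupported detection type: {detection_type}")
    return detectors[detection_type](image)

def pick_largest_target(boxes):
    # Choose the detection whose rectangular area (w * h) is largest.
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])

# Usage sketch with dummy detectors:
detectors = {
    "face": lambda img: [(950, 900, 25, 25), (80, 80, 100, 100)],
    "license_plate": lambda img: [],
}
boxes = detect_targets(image=None, detection_type="face", detectors=detectors)
print(pick_largest_target(boxes))   # -> (80, 80, 100, 100)
```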
203. Determine effect parameters of the target.
In this embodiment of the present application, the effect parameters of the target may include the relative position of the target with respect to the photographing device. The relative position includes parameters such as the direction and position of the target relative to the photographing device, which are not limited in the embodiments of the present application.
Exemplarily, the effect parameters of the target are angle parameters of the target relative to the photographing device, such as Euler angles, including a pitch angle and/or a yaw angle and/or a roll angle. The photographing device can determine the Euler angles of the target relative to the photographing device through an angle detection algorithm. The embodiments of the present application do not limit which angle detection algorithm is used.
It can be understood that when the photographing device detects multiple targets in the image, it can select one of the targets to calculate the effect parameters and determine the adjustment method in step 204 according to that target's effect parameters, or it can calculate the effect parameters of all the targets and then select one of them according to the effect parameters to perform step 204. In practical applications, the photographing device may also select a target through other selection methods, which is not limited in the embodiments of the present application.
In some embodiments, the effect parameters of the target further include a position parameter of the target in the image; the position parameter may be the coordinates of (the center of) the target in the image (including an abscissa parameter and an ordinate parameter). The photographing device may establish a Cartesian coordinate system with the lower-left corner of the image as the origin to determine the position parameter of the target in the image. In practical applications, the photographing device may also determine the position parameter of the target in the image in other ways, which is not limited in the embodiments of the present application.
In some embodiments, the effect parameters of the target further include the resolution of the target. The resolution of the target may refer to the number of pixels in the rectangular area corresponding to the target; exemplarily, a resolution of 25px*25px means 25 rows/columns of pixels arranged horizontally and vertically, so there are 625 pixels in total. Within the same image, the pixels are generally the same size, so the larger the resolution of the target, the larger the rectangular area corresponding to the target. Therefore, whether the size of the rectangular area corresponding to the target meets the requirements can be determined according to the resolution of the target.
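A small sketch of the two effect parameters just described: the target-centre coordinates in a coordinate system whose origin is the lower-left corner of the image, and the target resolution as a pixel count. The box layout and image height used here are assumed inputs for illustration only.

```python
# Sketch of the position and resolution effect parameters described above.
# A detection box is (x, y, w, h) with (x, y) the top-left corner in the
# usual image convention (origin at the top-left, y growing downward);
# the embodiment's coordinate system has its origin at the lower-left
# corner of the image, so the y axis is flipped using the image height.

def target_center_bottom_left(box, image_height):
    x, y, w, h = box
    cx = x + w / 2.0                      # abscissa parameter
    cy = image_height - (y + h / 2.0)     # ordinate parameter, origin at bottom-left
    return cx, cy

def target_resolution(box):
    # Resolution = pixel count of the target's rectangle, e.g. 25*25 = 625.
    _, _, w, h = box
    return w * h

box = (950, 900, 25, 25)
print(target_center_bottom_left(box, image_height=1080))
print(target_resolution(box))             # -> 625
```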
In some embodiments, the effect parameters of the target include a sharpness parameter of the target. The sharpness parameter of the target may refer to a parameter related to sharpness, such as the bit rate. The photographing device can read these parameters from the image.
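The bit-rate-style sharpness value mentioned above is read from the image by the device and is not reproducible here; the sketch below substitutes a common stand-in metric (variance of the Laplacian over the target's rectangle) purely for illustration, which is an assumption rather than the parameter the patent uses.

```python
# Stand-in sharpness measure for the target's rectangle: variance of the
# Laplacian (higher = sharper). This is an illustrative assumption, not the
# bit-rate-based parameter the embodiment reads from the image.
# Assumes OpenCV and NumPy.
import cv2
import numpy as np

def sharpness_of_target(gray_image, box):
    x, y, w, h = box
    roi = gray_image[y:y + h, x:x + w]
    return float(cv2.Laplacian(roi, cv2.CV_64F).var())

gray = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # dummy frame
print(sharpness_of_target(gray, (950, 900, 25, 25)))
```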
In some embodiments, after step 203, the photographing device may also obtain its own parameter information, such as the UID, name, and IP address of the photographing device, as well as the time at which the image was captured, and then combine this parameter information, the effect parameters of the target, and the image into one display image. FIG. 5 is an example diagram of the display image provided by an embodiment of the present application. In this display image, the parameters at the top are the name of the photographing device, the IP address, the time at which the image was captured, and the resolution of the target. The photographing device may provide a corresponding interface so that the client or the server can obtain the display image through this interface and display it.
It can be understood that the parameter information of the photographing device and the effect parameters of the target obtained by the photographing device can both be displayed. The user can obtain this information from the relevant interface of the photographing device through the terminal device, and the terminal device can show the information on its display screen. As shown in FIG. 4, the name of the photographing device is Camera 1, the IP address is X.X.X.X, the time at which the image was captured is 11:11 on January 1, 2020, the information of target 1 is: position parameter [x:1000, y:1000], resolution [25px*25px], and the information of target 2 is: position [x:100, y:100], resolution [100px*100px]. In practical applications, the photographing device may also detect other information so that the user can observe it on the display screen of the terminal device; the embodiments of the present application do not limit the amount and type of displayed information.
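As a rough illustration of how such a display image could be composed, the sketch below overlays the device parameters and per-target effect parameters onto the frame; the layout, colours, and field names are assumptions and do not reproduce the exact format shown in FIG. 5.

```python
# Sketch of composing the display image described above: overlay the device
# parameters and each target's effect parameters onto the captured frame.
# Layout, colours and field names are assumptions for illustration, not the
# output of the patent's ImgEncode module. Assumes OpenCV and NumPy.
import cv2
import numpy as np

def compose_display_image(frame, device_info, targets):
    canvas = frame.copy()
    header = "{name}  {ip}  {time}".format(**device_info)
    cv2.putText(canvas, header, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (255, 255, 255), 2)
    for i, t in enumerate(targets, start=1):
        x, y, w, h = t["box"]
        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 0), 2)
        label = "target %d  pos[x:%d y:%d]  res[%dpx*%dpx]" % (
            i, t["pos"][0], t["pos"][1], w, h)
        cv2.putText(canvas, label, (x, max(20, y - 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return canvas

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)                 # dummy frame
info = {"name": "Camera 1", "ip": "X.X.X.X", "time": "2020-01-01 11:11"}
targets = [{"box": (950, 900, 25, 25), "pos": (1000, 1000)},
           {"box": (80, 80, 100, 100), "pos": (100, 100)}]
print(compose_display_image(frame, info, targets).shape)
```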
204. If the effect parameters of the target are not within the preset range, determine the adjustment method according to the comparison result between the effect parameters of the target and the preset range.
In this embodiment of the present application, if the effect parameters of the target are not within the preset range, the target does not meet the shooting requirements, and the installer needs to further adjust the photographing device to obtain a better shooting effect. In this embodiment of the present application, after determining that the effect parameters of the target are not within the preset range, the photographing device can determine the adjustment method according to the comparison result between the effect parameters of the target and the preset range, so as to adjust automatically or prompt the installer to adjust. The specific process of determining the adjustment method is as follows:
First, the effect parameters of the target include the relative position of the target with respect to the photographing device, the relative position includes angle parameters of the target relative to the photographing device, and the angle parameters include a pitch angle and/or a yaw angle and/or a roll angle. If the pitch angle in the angle parameters is not within the preset range, the photographing device determines that one of the adjustment methods is to rotate the photographing device up or down; if the yaw angle in the angle parameters is not within the preset range, the photographing device determines that one of the adjustment methods is to rotate the photographing device left or right; if the roll angle in the angle parameters is not within the preset range, the photographing device determines that one of the adjustment methods is to rotate the photographing device clockwise or counterclockwise. Exemplarily, in FIG. 4 the target at the lower left of the image lowers its head, so the photographing device detects that the pitch angle of the target is not within the preset range and determines that one of the adjustment methods is to rotate the photographing device up or down. Further, since the target lowers its head, the adjustment method is to tilt the photographing device upward. In practical applications, the adjustment method can be expressed in different wording, for example "Please raise the camera to reduce the pitch angle" in FIG. 4; the embodiments of the present application do not limit the specific wording.
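A minimal sketch of the rule just described, mapping out-of-range Euler angles to rotation suggestions; the preset range and the sign-to-direction mapping are illustrative assumptions, since the patent does not fix either.

```python
# Sketch of the angle rule above: compare the target's Euler angles against
# a preset range and turn any violation into a rotation suggestion. The
# preset range (+/-15 degrees) and the mapping from angle sign to rotation
# direction are assumptions; both depend on the angle convention in use.
PRESET_ANGLE_RANGE = (-15.0, 15.0)   # assumed, in degrees

def angle_adjustments(pitch, yaw, roll, rng=PRESET_ANGLE_RANGE):
    lo, hi = rng
    advice = []
    if not lo <= pitch <= hi:
        advice.append("rotate the camera "
                      + ("upward" if pitch > hi else "downward")
                      + " to reduce the pitch angle")
    if not lo <= yaw <= hi:
        advice.append("rotate the camera "
                      + ("to the left" if yaw > hi else "to the right")
                      + " to reduce the yaw angle")
    if not lo <= roll <= hi:
        advice.append("rotate the camera "
                      + ("counterclockwise" if roll > hi else "clockwise")
                      + " to reduce the roll angle")
    return advice

print(angle_adjustments(pitch=30.0, yaw=0.0, roll=0.0))
```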
二、目标的效果参数包括目标相对于拍摄设备的相对位置,该相对位置包括目标在图像中的位置参数,位置参数为目标在图像的坐标,那么若位置参数中的横坐标参数不在预设范围内,则确定调整方式之一为横向平移拍摄设备;若位置参数中的纵坐标参数不在预设范围内,则确定调整方式之一为纵向平移拍摄设备。示例性的,图4中图像左下侧目标比较靠左,因此拍摄设备检测到该目标的的横坐标参数不在预设范围内,则确定调整方式之一为横向平移拍摄设备。具体地,调整方式为“请向左平移摄像机,以使目标位于图像中央”。
三、目标的包括目标的分辨率,那么若目标的分辨率小于第一预设阈值,则确定调整方式之一为拉近拍摄设备与其拍摄对象的距离或加长拍摄设备的焦距;若目标的分辨率大于第一预设阈值,则确定调整方式之一为调远拍摄设备与其拍摄对象的距离或缩短拍摄设备的焦距。可以理解的是,当拍摄设备具备调整焦距的功能时,一般可以采用调整焦距的方式进行调整。当拍摄设备为固定焦距的设备时,则可以确定调整距离的方式进行调整。示例性的,图4中目标的分辨率小于第一预设阈值,说明目标对应矩形区域不够大,需要拉近焦距(加长拍摄设备的焦距)。
四、目标的效果参数包括目标的清晰度参数,那么若目标的清晰度参数不在预设范围内,则确定调整方式之一为调整拍摄设备的焦距。示例性的,图4中目标图像的清晰度足够,则无需调整。
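The following Python sketch consolidates the four rules above into one decision function. The threshold values and the wording of the suggestions are illustrative assumptions; the application only fixes the direction of each rule, not concrete numbers or phrasing.

    # Sketch: map out-of-range effect parameters to plain-language adjustment tips.
    def adjustment_suggestions(pitch=None, yaw=None, roll=None,
                               centre=None, resolution=None, sharpness=None,
                               angle_range=(-15, 15), x_range=(760, 1160),
                               y_range=(340, 740), min_res=50, max_res=400,
                               min_sharpness=100.0):
        tips = []

        def out_of(value, rng):
            return value is not None and not (rng[0] <= value <= rng[1])

        if out_of(pitch, angle_range):
            tips.append("tilt the camera up or down to reduce the pitch angle")
        if out_of(yaw, angle_range):
            tips.append("pan the camera left or right to reduce the yaw angle")
        if out_of(roll, angle_range):
            tips.append("rotate the camera clockwise or counter-clockwise")
        if centre is not None:
            if out_of(centre[0], x_range):
                tips.append("translate the camera horizontally to centre the target")
            if out_of(centre[1], y_range):
                tips.append("translate the camera vertically to centre the target")
        if resolution is not None:
            side = max(resolution)
            if side < min_res:
                tips.append("move closer to the target or lengthen the focal length")
            elif side > max_res:
                tips.append("move farther from the target or shorten the focal length")
        if sharpness is not None and sharpness < min_sharpness:
            tips.append("adjust the focus (focal length) of the camera")
        return tips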
In some embodiments, after step 204, the photographing device may adjust itself according to the adjustment method obtained in step 204 and/or display the adjustment method. The user may click the "auto adjust" button on the operation interface shown in FIG. 4 to trigger the procedure in which the photographing device adjusts itself according to the adjustment method obtained in step 204. When the photographing device receives the trigger signal associated with the "auto adjust" button, the photographing device can adjust automatically according to the adjustment method obtained in step 204. It can be understood that an ordinary camera can adjust its focal length, whereas a pan-tilt-zoom camera can not only adjust its focal length but also rotate. In practical applications, the photographing device can determine the relevant adjustment functions according to its own device configuration so as to match the adjustment method obtained in step 204; the embodiments of the present application do not limit the device configuration of the photographing device. This "one-click adjustment" function can further reduce the user's operation cost and improve the usability of the system.
In some embodiments, the photographing device may display the adjustment method obtained in step 204, as shown in FIG. 4. In the embodiments of the present application, the adjustment method is converted into plain language that prompts the user on how to make the adjustment, so that the installer can better perform the manual adjustment. This manner of presentation gives the user a better reference and makes the system easier to use.
FIG. 6 is a diagram of the effect after adjustment provided in an embodiment of the present application. After the photographing device adjusts automatically, or the installer adjusts according to the adjustment suggestion, the photographing device can capture a more suitable target image and obtain a better imaging effect, making face recognition or license plate recognition more accurate.
FIG. 7 is a schematic flowchart of an embodiment of the present application. In this embodiment, at the moment the user triggers a snapshot (for example, by clicking the "manual snapshot" button of FIG. 3), the photographing device obtains from the live video stream the image frame closest to the snapshot moment, pre-processes it, and converts it into image information that the algorithm can process. The photographing device can then extract feature information from the image information through the algorithm model, fit the feature information to the existing data model, and give the algorithm detection result, which for example includes whether a target exists in the image information and the relative position of the detected target, and finally integrate it into the output result of the algorithm. After the algorithm gives the detection result, the service processing module in the photographing device converts the algorithm detection result into an adjustment suggestion and feeds it back to the user. The user can make a manual adjustment according to the adjustment suggestion or instruct the photographing device to adjust automatically.
FIG. 8 is a signaling diagram of an embodiment of the present application. The client (Client) may be a web client or a client built into the camera. The user can access the web client through a terminal device; the web client can communicate with the camera, obtain information from the camera and send instructions to the camera. The client can display a video preview interface (as in FIG. 3), trigger switches (for example the "manual snapshot" button) and the result presentation (as in FIG. 4). The request sent by the client to the core management module (the ImgEvalChain module) is mainly used to notify the camera server side to evaluate or adjust the current image. The request mainly contains basic information such as the evaluation type (face detection or license plate detection in the embodiment corresponding to FIG. 3) and the evaluation mode (the "manual snapshot" button and the "start continuous tuning" button in the embodiment corresponding to FIG. 3 correspond to two modes respectively).
YuvService represents the live video stream data processing module on the camera server side; the image data used in the evaluation process is obtained from this module. This module is responsible for image processing and video encoding/decoding, and converts the image into data that the algorithm can process.
ImgEvalChain is the core management module on the camera server side. After receiving the request sent by the Client, the ImgEvalChain module obtains image data from the YuvService module, processes the obtained data and sends it to the AlgProcess module for processing. The processing result of the AlgProcess module is packaged together with the image data from YuvService into a request sent to the ImgEncode module; after processing the request, the ImgEncode module returns a response, which mainly contains the diagnosis picture produced by the ImgEncode module. Finally, ImgEvalChain determines the adjustment method by converting the algorithm result into natural language (i.e., step 204 in the embodiment corresponding to FIG. 2 above), generates the tuning suggestion, packages it together with the diagnosis picture and returns it to the Client. After obtaining the diagnosis result, the Client can send an AutoAdjust request to notify the camera to adjust automatically.
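A hypothetical sketch of the request/response payloads exchanged in FIG. 8 is given below. The field names (detect_type, mode, interval_s and so on) mirror the description above but are assumptions made for illustration; they are not the actual interface of the camera firmware.

    # Sketch: illustrative data structures for the Client <-> ImgEvalChain exchange.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EvalRequest:                     # Client -> ImgEvalChain
        detect_type: str                   # "face" or "plate"
        mode: str                          # "manual" or "continuous"
        interval_s: Optional[int] = None   # only used in continuous mode, e.g. 300

    @dataclass
    class EvalResponse:                    # ImgEvalChain -> Client
        diagnosis_image: bytes             # the annotated picture from ImgEncode
        suggestions: List[str] = field(default_factory=list)  # plain-language tips

    @dataclass
    class AutoAdjustRequest:               # Client -> camera, after review
        apply_suggestions: bool = True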
AlgProcess is the core technical module of the present invention, and its processing is also completed on the camera side. It is mainly responsible for processing the data in the image, performing target detection and target scoring (including but not limited to pixel size, target position, target angle and clarity), and providing the basic algorithm results.
ImgEncode is the module on the camera server side responsible for visualizing the diagnosis result. This module integrates the basic device information, the captured image and the algorithm output (i.e., the effect parameters of the target) into a single picture (for example, the picture shown in FIG. 5 above). The client presents the result intuitively through this picture, so that the user obtains feedback.
In summary, the embodiments of the present application perform snapshot analysis on targets in the scene by means of algorithms and give analysis results, which can quantify how good the mounting angle and height are during site survey and reduce subjective human judgment. Moreover, the embodiments of the present application can perform snapshot analysis on both faces and license plates, which greatly expands the application scenarios of the present application.
FIG. 9 is a schematic diagram of a detection apparatus for a photographing device provided in an embodiment of the present application. The apparatus 900 includes:
an acquisition module 901, configured to perform step 201 in each of the embodiments corresponding to FIG. 2 above;
a processing module 902, configured to perform step 202, step 203 and step 204 in each of the embodiments corresponding to FIG. 2 above.
FIG. 10 is a schematic diagram of a camera device (for example a camera) provided in an embodiment of the present application. The camera device 1000 may be the photographing device in the architecture corresponding to FIG. 1 above. The camera device 1000 includes a lens 1001, a sensor 1002 and a processor 1003, where the lens 1001 is configured to receive light, the sensor 1002 is configured to perform photoelectric conversion on the light received by the lens to generate an image, and the processor 1003 is configured to implement the method of the method embodiment shown in FIG. 2.
The function of the lens 1001 is to present the optical image of the observed target on the sensor of the camera, which is also called optical imaging. The lens 1001 combines optical elements (reflectors, transmissive lenses, prisms) of various shapes and materials (plastic, glass or crystal) in a certain manner, so that after light is transmitted through or reflected by these optical elements, its propagation direction is changed as required and it is received by the receiving device, completing the optical imaging of the object. Generally, each lens 1001 is composed of several groups of lens elements with different surface curvatures, combined at different spacings. The choice of spacings, element curvatures, transmittance and other parameters determines the focal length of the lens. The main parameters of the lens 1001 include effective focal length, aperture, maximum image plane, field of view, distortion, relative illuminance and the like; the values of these parameters determine the overall performance of the lens 1001.
The sensor 1002 (also called an image sensor) is a device that converts an optical image into an electronic signal and is widely used in digital cameras and other electronic optical devices. Common sensors 1002 include the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) sensor. Both CCD and CMOS sensors have a large number (for example tens of millions) of photodiodes; each photodiode is called a photosensitive element, and each photosensitive element corresponds to one pixel. During exposure, after receiving light, the photodiode converts the optical signal into an electrical signal containing brightness (or brightness and color), and the image is thereby reconstructed. The Bayer array is a common image sensor technology that can be applied to CCD and CMOS sensors. A Bayer array uses Bayer color filters so that each pixel is sensitive to only one of the three primary colors (red, green, blue); these pixels are interleaved, and the original image is then recovered by demosaicing interpolation. A sensor using a Bayer array is also called a Bayer sensor. In addition to Bayer sensors, there are other sensor technologies such as X3 (developed by Foveon), which uses three layers of photosensitive elements, each layer recording one of the RGB color channels, so that full color can be captured at a single pixel location.
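A tiny non-limiting sketch of the demosaicing step mentioned above follows: a single-channel Bayer mosaic is interpolated back into a three-channel image. The BGGR pattern and the random stand-in data are assumptions for illustration; a real sensor declares its own Bayer layout and supplies real raw data.

    # Sketch: demosaic a Bayer mosaic into a 3-channel image with OpenCV.
    import cv2
    import numpy as np

    bayer = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in raw mosaic
    rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)            # interpolate to 3 channels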
The processor 1003 (also called an image processor) may be a collection of multiple chips or a single chip; for example, the processor 1003 is a system on chip (SoC). The processor 1003 may include an image signal processor (ISP). The processor is configured to convert the image generated by the sensor into a three-channel format (for example YUV), improve the image quality, and detect whether there is a target object in the image, and may also be configured to encode the image.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (19)

  1. A detection method for a photographing device, comprising:
    acquiring an image captured by a photographing device;
    detecting a target in the image according to the image;
    determining effect parameters of the target, the effect parameters comprising a relative position of the target with respect to the photographing device; and
    if the effect parameters of the target are not within a preset range, determining an adjustment method for the relative position of the photographing device according to a comparison result between the effect parameters of the target and the preset range.
  2. The method according to claim 1, wherein the relative position of the target with respect to the photographing device comprises an angle parameter of the target relative to the photographing device, the angle parameter comprising a pitch angle and/or a yaw angle and/or a roll angle.
  3. The method according to claim 2, wherein determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range comprises:
    if the pitch angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is rotating the photographing device upward or downward;
    if the yaw angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is rotating the photographing device to the left or to the right;
    if the roll angle in the angle parameter is not within the preset range, determining that one of the adjustment methods is rotating the photographing device clockwise or counter-clockwise.
  4. The method according to any one of claims 1 to 3, wherein the relative position of the target with respect to the photographing device comprises a position parameter of the target in the image, the position parameter being coordinates of the target in the image.
  5. The method according to claim 4, wherein determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range comprises:
    if an abscissa parameter in the position parameter is not within the preset range, determining that one of the adjustment methods is translating the photographing device horizontally;
    if an ordinate parameter in the position parameter is not within the preset range, determining that one of the adjustment methods is translating the photographing device vertically.
  6. The method according to claim 1, wherein the effect parameters comprise a resolution of the target, and determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range comprises:
    if the resolution of the target in the image is smaller than a first preset threshold, determining that one of the adjustment methods is shortening the distance between the photographing device and its photographed object or lengthening the focal length of the photographing device;
    if the resolution of the target in the image is larger than the first preset threshold, determining that one of the adjustment methods is increasing the distance between the photographing device and its photographed object or shortening the focal length of the photographing device.
  7. The method according to any one of claims 1 to 6, wherein the effect parameters comprise a clarity parameter of the target, and determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range comprises:
    if the clarity parameter of the target is not within the preset range, determining that one of the adjustment methods is adjusting the focal length of the photographing device.
  8. The method according to any one of claims 1 to 7, wherein after determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range, the method further comprises:
    adjusting the photographing device according to the adjustment method and/or displaying the adjustment method.
  9. The method according to any one of claims 1 to 8, wherein after determining the effect parameters of the target and before determining the adjustment method for the relative position of the photographing device according to the comparison result between the effect parameters of the target and the preset range, the method further comprises:
    acquiring parameter information of the photographing device;
    combining the parameter information of the photographing device, the effect parameters of the target and the image into a single display image and displaying the display image.
  10. A detection apparatus for a photographing device, comprising:
    an acquisition module, configured to acquire an image;
    a processing module, configured to detect a target in the image according to the image;
    the processing module being further configured to determine effect parameters of the target, the effect parameters comprising a relative position of the target with respect to the photographing device;
    wherein, if the effect parameters of the target are not within a preset range, the processing module is further configured to determine an adjustment method for the relative position of the photographing device according to a comparison result between the effect parameters of the target and the preset range.
  11. The apparatus according to claim 10, wherein the relative position of the target with respect to the photographing device comprises an angle parameter of the target relative to the photographing device, the angle parameter comprising a pitch angle and/or a yaw angle and/or a roll angle.
  12. The apparatus according to claim 11, wherein the processing module is further configured to:
    if the pitch angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is rotating the photographing device upward or downward;
    if the yaw angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is rotating the photographing device to the left or to the right;
    if the roll angle in the angle parameter is not within the preset range, determine that one of the adjustment methods is rotating the photographing device clockwise or counter-clockwise.
  13. The apparatus according to any one of claims 10 to 12, wherein the relative position of the target with respect to the photographing device comprises a position parameter of the target in the image, the position parameter being coordinates of the center of the target in the image.
  14. The apparatus according to claim 13, wherein the processing module is further configured to:
    if an abscissa parameter in the position parameter is not within the preset range, determine that one of the adjustment methods is translating the photographing device horizontally;
    if an ordinate parameter in the position parameter is not within the preset range, determine that one of the adjustment methods is translating the photographing device vertically.
  15. The apparatus according to any one of claims 10 to 14, wherein the effect parameters comprise a resolution of the target, and the processing module is further configured to:
    if the resolution of the target in the image is smaller than a first preset threshold, determine that one of the adjustment methods is shortening the distance between the photographing device and its photographed object or lengthening the focal length of the photographing device;
    if the resolution of the target in the image is larger than the first preset threshold, determine that one of the adjustment methods is increasing the distance between the photographing device and its photographed object or shortening the focal length of the photographing device.
  16. The apparatus according to any one of claims 10 to 15, wherein the effect parameters comprise a clarity parameter of the target, and the processing module is further configured to:
    if the clarity parameter of the target is not within the preset range, determine that one of the adjustment methods is adjusting the focal length of the photographing device.
  17. The apparatus according to any one of claims 10 to 16, wherein the processing module is further configured to:
    adjust the photographing device according to the adjustment method and/or display the adjustment method.
  18. The apparatus according to any one of claims 10 to 17, wherein:
    the acquisition module is further configured to acquire parameter information of the photographing device; and
    the processing module is further configured to combine the parameter information of the photographing device, the effect parameters of the target and the image into a single display image and display the display image.
  19. A camera device, comprising a lens, a sensor and a processor, wherein:
    the lens is configured to receive light;
    the sensor is configured to perform photoelectric conversion on the light received by the lens to generate an image; and
    the processor is configured to implement the method according to any one of claims 1 to 9.
PCT/CN2021/093015 2020-09-22 2021-05-11 一种拍摄设备的检测方法及相关装置 WO2022062421A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011003383 2020-09-22
CN202011003383.X 2020-09-22
CN202110016897.7A CN114257732A (zh) 2020-09-22 2021-01-05 一种拍摄设备的检测方法及相关装置
CN202110016897.7 2021-01-05

Publications (1)

Publication Number Publication Date
WO2022062421A1 true WO2022062421A1 (zh) 2022-03-31

Family

ID=80790849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093015 WO2022062421A1 (zh) 2020-09-22 2021-05-11 一种拍摄设备的检测方法及相关装置

Country Status (2)

Country Link
CN (1) CN114257732A (zh)
WO (1) WO2022062421A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013201793A (ja) * 2013-07-11 2013-10-03 Nikon Corp 撮像装置
CN104735355A (zh) * 2015-03-13 2015-06-24 广东欧珀移动通信有限公司 一种智能终端的摄像方法及装置
CN104917959A (zh) * 2015-05-19 2015-09-16 广东欧珀移动通信有限公司 一种拍照方法及终端
CN107465869A (zh) * 2017-07-27 2017-12-12 努比亚技术有限公司 一种焦距调节方法及终端
CN110719406A (zh) * 2019-10-15 2020-01-21 腾讯科技(深圳)有限公司 拍摄处理方法、拍摄设备及计算机设备


Also Published As

Publication number Publication date
CN114257732A (zh) 2022-03-29


Legal Events

Code: Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21870808; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep: pct application non-entry in european phase (Ref document number: 21870808; Country of ref document: EP; Kind code of ref document: A1)