CN113286086A - Camera use control method and device, electronic equipment and storage medium - Google Patents

Camera use control method and device, electronic equipment and storage medium

Info

Publication number: CN113286086A
Application number: CN202110578154.9A
Authority: CN (China)
Prior art keywords: camera, image, image acquisition, target object, vehicle
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113286086B
Inventors: 王小刚, 余程鹏
Current and original assignee: Nanjing Leading Technology Co., Ltd.
Events: application filed by Nanjing Leading Technology Co., Ltd.; priority to CN202110578154.9A; publication of CN113286086A; application granted; publication of CN113286086B; anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a camera use control method and device, an electronic device, and a storage medium, belonging to the technical field of vehicle operation. A first in-vehicle image collected by a first camera and a second in-vehicle image collected by a second camera are acquired, where the orientation of the first camera is fixed and its image acquisition range within that orientation is adjustable, while the orientation and image acquisition range of the second camera are both fixed. Position information of a target object in a specified area is obtained from each image; the position information from the second in-vehicle image is converted based on the position conversion relationship between images of the same object acquired by the second camera and by the first camera when the first camera acquires within a specified image acquisition range; and whether the actual image acquisition range of the first camera exceeds an allowable range is determined from the converted position information and the position information from the first in-vehicle image. Therefore, the situation that the user has adjusted the first camera to a position where it cannot capture the effective area can be effectively discovered, and the user's use of the first camera can be standardized.

Description

Camera use control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of vehicle operation technologies, and in particular, to a method and an apparatus for controlling use of a camera, an electronic device, and a storage medium.
Background
With the rapid development of Internet technology, the convergence of traditional transportation and the Internet has flourished, making the online ride-hailing service (referred to below as network car booking) an important way for users to travel.
A network car booking operator installs a Driver Monitor System (DMS) camera in each vehicle in order to detect irregular driving behaviors, and penalizes such behaviors based on the in-vehicle images acquired by the DMS camera. However, because users differ in height and build, the operator can only fix the orientation of the DMS camera and must allow the user to adjust its image acquisition range within that orientation. As a result, a user may easily adjust the camera so that it can no longer capture an effective region such as the face, in which case the DMS camera serves no purpose and exists in name only.
Disclosure of Invention
The embodiments of the application provide a camera use control method and device, an electronic device, and a storage medium, to address the difficulty, in the related art, of detecting whether a user's use of a camera is compliant.
In a first aspect, an embodiment of the present application provides a method for controlling use of a camera, including:
acquiring a first vehicle interior image acquired by a first camera and a second vehicle interior image acquired by a second camera, wherein the orientation of the first camera is fixed, the image acquisition range in the orientation is adjustable, and the orientation and the image acquisition range of the second camera are both fixed;
acquiring position information of a target object in a specified area from the first in-vehicle image and the second in-vehicle image respectively;
performing conversion processing on the position information acquired from the second in-vehicle image based on a position conversion relationship between images of the same object acquired by the second camera and by the first camera when the first camera performs image acquisition in a specified image acquisition range;
and determining whether the actual image acquisition range of the first camera exceeds an allowable image acquisition range based on the converted position information and the position information acquired from the first in-vehicle image, wherein the allowable image acquisition range is predetermined according to the specified image acquisition range.
In some possible embodiments, the determining whether the actual image capturing range of the first camera exceeds the allowable image capturing range based on the converted position information and the position information acquired from the first in-vehicle image includes:
determining a first specification value based on the converted position information of the key points of the target object and the position information of the key points extracted from the first in-vehicle image, wherein the first specification value is used to represent the degree of coincidence between the converted key points of the target object and the key points extracted from the first in-vehicle image;
and determining, according to the first specification value, whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
In some possible embodiments, determining the first specification value based on the converted position information of the key point of the target object and the position information of the key point extracted from the first in-vehicle image includes:
determining the distance between the key points based on the converted position information of each key point of the target object and the position information of the corresponding key point extracted from the first in-vehicle image;
based on the distances between the keypoints, a first specification value is determined.
In some possible embodiments, the first specification value sim1 is determined according to the following formula:

$$s_i = \begin{cases} 1 - d_i / r, & d_i < r \\ 0, & d_i \ge r \end{cases}$$

$$sim1 = \frac{1}{n} \sum_{i=1}^{n} s_i$$

where $d_i$ is the distance for key point $i$, $n$ is the number of key points, and $r$ is a set radius.
In some possible embodiments, the position information further includes position area information of the target object, and the method further includes:
determining a second specification value based on the converted position area information of the target object and the position area information extracted from the first in-vehicle image, wherein the second specification value is used to represent the degree of coincidence between the converted position area of the target object and the position area extracted from the first in-vehicle image;
and determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range according to the first specification value then includes:
determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range according to the first specification value and the second specification value.
In some possible embodiments, the second specification value sim2 is determined according to the following formula:

$$Area_{A \cap B} = Area(A \cap B)$$

$$sim2 = \frac{Area_{A \cap B}}{Area_A + Area_B - Area_{A \cap B}}$$

where A represents the converted position area of the target object, $Area_A$ represents the area of A, B represents the position area of the target object in the first in-vehicle image, and $Area_B$ represents the area of B.
In some possible embodiments, determining whether the actual image capturing range of the first camera exceeds the allowable image capturing range according to the first specification value and the second specification value includes:
when the average of the first specification value and the second specification value is greater than a preset value, determining that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range.
In some possible embodiments, the method further comprises:
detecting the movement speed of the vehicle;
and when the detected movement speed is greater than a set speed, acquiring the first in-vehicle image acquired by the first camera and the second in-vehicle image acquired by the second camera.
In some possible embodiments, the method further comprises:
determining whether the first camera is blocked or not based on the position information acquisition result of the first in-vehicle image; and/or
And determining whether the second camera is blocked or not based on the position information acquisition result of the second in-vehicle image.
In a second aspect, an embodiment of the present application provides a use control device for a camera, including:
an image acquisition unit, configured to acquire a first in-vehicle image collected by a first camera and a second in-vehicle image collected by a second camera, wherein the orientation of the first camera is fixed, the image acquisition range in that orientation is adjustable, and the orientation and image acquisition range of the second camera are both fixed;
a position acquisition unit configured to acquire position information of a target object in a specified area from the first in-vehicle image and the second in-vehicle image, respectively;
the conversion unit is used for converting the position information acquired from the second in-vehicle image based on the position conversion relation between the images of the same object acquired by the second camera and the first camera when the first camera acquires the image in the designated image acquisition range;
and the determining unit is used for determining whether the actual image acquisition range of the first camera exceeds an allowable image acquisition range based on the converted position information and the position information acquired from the first in-vehicle image, wherein the allowable image acquisition range is predetermined according to the specified image acquisition range.
In some possible embodiments, the location information at least includes location information of a key point of the target object, and the determining unit specifically includes:
a first specification subunit, configured to determine a first specification value based on the converted location information of the key point of the target object and the location information of the key point extracted from the first in-vehicle image, where the first specification value is used to represent a degree of coincidence between the converted key point of the target object and the key point extracted from the first in-vehicle image;
and a judging subunit, configured to determine, according to the first specification value, whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
In some possible embodiments, the first specification subunit is specifically configured to:
determining the distance between the key points based on the converted position information of each key point of the target object and the position information of the corresponding key point extracted from the first in-vehicle image;
based on the distances between the keypoints, a first specification value is determined.
In some possible embodiments, the first specification subunit is specifically configured to determine the first specification value sim1 according to the following formula:

$$s_i = \begin{cases} 1 - d_i / r, & d_i < r \\ 0, & d_i \ge r \end{cases}$$

$$sim1 = \frac{1}{n} \sum_{i=1}^{n} s_i$$

where $d_i$ is the distance for key point $i$, $n$ is the number of key points, and $r$ is a set radius.
In some possible embodiments, the location information further includes location area information of the target object, and the determining unit further includes:
a second specification subunit, configured to determine a second specification value based on the converted position area information of the target object and the position area information extracted from the first in-vehicle image, where the second specification value is used to represent a degree of coincidence between the converted position area of the target object and the position area extracted from the first in-vehicle image;
the judging subunit is further configured to determine whether an actual image acquisition range of the first camera exceeds an allowable image acquisition range according to the first specification value and the second specification value.
In some possible embodiments, the second specification subunit is specifically configured to determine the second specification value sim2 according to the following formula:

$$Area_{A \cap B} = Area(A \cap B)$$

$$sim2 = \frac{Area_{A \cap B}}{Area_A + Area_B - Area_{A \cap B}}$$

where A represents the converted position area of the target object, $Area_A$ represents the area of A, B represents the position area of the target object in the first in-vehicle image, and $Area_B$ represents the area of B.
In some possible embodiments, the determining subunit is specifically configured to:
and when the average value of the first standard value and the second standard value is larger than a preset value, determining that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range.
In some possible embodiments, the device further includes:
a detection unit for detecting a moving speed of the vehicle;
the image acquisition unit is further configured to acquire the first in-vehicle image acquired by the first camera and the second in-vehicle image acquired by the second camera when the detected movement speed is greater than a set speed.
In some possible embodiments, the device further includes an occlusion determining unit, configured to:
determining whether the first camera is blocked or not based on the position information acquisition result of the first in-vehicle image; and/or
And determining whether the second camera is blocked or not based on the position information acquisition result of the second in-vehicle image.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the camera usage control method.
In a fourth aspect, embodiments of the present application provide a storage medium, where when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is capable of executing the usage control method of the camera.
In the embodiments of the application, the orientation of the first camera is fixed and its image acquisition range within that orientation is adjustable, while the orientation and image acquisition range of the second camera are both fixed. A first in-vehicle image collected by the first camera and a second in-vehicle image collected by the second camera are acquired, and position information of a target object in a specified area is obtained from each image. The position information obtained from the second in-vehicle image is then converted based on the position conversion relationship between images of the same object acquired by the second camera and by the first camera when the first camera acquires images within a specified image acquisition range. Finally, whether the actual image acquisition range of the first camera exceeds an allowable image acquisition range, predetermined according to the specified image acquisition range, is determined from the converted position information and the position information obtained from the first in-vehicle image. In this way, by means of the position conversion relationship, it can be judged whether the actual image acquisition range of the first camera exceeds the allowable range, that is, whether the user's use of the first camera is compliant. The situation where the user has adjusted the first camera to a position where the effective area cannot be captured can thus be effectively discovered, which helps standardize the user's use of the first camera.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a scene schematic diagram of a usage control method of a camera according to an embodiment of the present application;
fig. 2 is a flowchart of a usage control method of a camera according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a usage control method for another camera provided in the embodiment of the present application;
FIG. 4 is a flow chart for determining a first specification value according to an embodiment of the present application;
fig. 5 is a flowchart of a usage control method for another camera provided in the embodiment of the present application;
fig. 6 is a schematic flow chart of usage control of a camera according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a process for determining a homography matrix H according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a process for determining whether a user's usage of a DMS camera is compliant according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a use control device of a camera according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of an electronic device for implementing a usage control method of a camera according to an embodiment of the present application.
Detailed Description
In order to solve the problem that whether the use of a camera by a user is standard or not is difficult to detect in the related art, embodiments of the present application provide a method and an apparatus for controlling the use of a camera, an electronic device, and a storage medium.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a schematic view of a scene of a camera use control method according to an embodiment of the present application. It includes a first camera and a second camera mounted on a vehicle. The orientation of the first camera is fixed and its image acquisition range within that orientation is adjustable; the orientation and image acquisition range of the second camera are both fixed. The first camera may be a Driver Monitor System (DMS) camera that captures in-vehicle images of the driving seat area in real time to monitor the user's driving behavior, and the second camera may be a Digital Video Recorder (DVR) camera that captures in-vehicle images of the driving seat area and its surroundings in real time to monitor conditions in the vehicle. That is, the image acquisition ranges of both cameras cover the driving seat area.
Fig. 2 is a flowchart of a usage control method of a camera according to an embodiment of the present application, including the following steps.
In step S201, a first in-vehicle image captured by the first camera and a second in-vehicle image captured by the second camera are acquired.
The orientation of the first camera is fixed, the image acquisition range in the orientation is adjustable, and the orientation and the image acquisition range of the second camera are fixed.
In a specific implementation, the smaller the difference between the capture times of the first and second in-vehicle images, the more accurate the judgment of whether the actual image acquisition range of the first camera (i.e., its range when capturing the first image) exceeds the allowable image acquisition range, that is, the more accurate the compliance judgment for the first camera. The capture-time difference between the two images can therefore be required to be smaller than a preset difference, to ensure the accuracy of the compliance judgment.
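For illustration, a minimal sketch of enforcing this capture-time constraint when pairing frames from the two cameras; the frame representation, timestamp units, and the 0.1-second default are assumptions, not specified by the application:

```python
from typing import List, Optional, Tuple

def pick_frame_pair(dms_frames: List[Tuple[float, object]],
                    dvr_frames: List[Tuple[float, object]],
                    max_dt: float = 0.1) -> Optional[tuple]:
    """Return the (DMS, DVR) frame pair whose capture timestamps differ by
    less than max_dt seconds (closest pair wins), or None if none qualifies."""
    best, best_dt = None, max_dt
    for t1, img1 in dms_frames:
        for t2, img2 in dvr_frames:
            if abs(t1 - t2) < best_dt:
                best, best_dt = ((t1, img1), (t2, img2)), abs(t1 - t2)
    return best
```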
In step S202, position information of the target object in the designated area is acquired from the first in-vehicle image and the second in-vehicle image, respectively.
Wherein the position information of the target object in the designated area is the position information of the face in the driving position area.
In a particular implementation, the following may be performed for each of the first in-vehicle image and the second in-vehicle image: locate the driving seat area in the image, perform face detection on the located area, and perform face keypoint detection on the detected face region, so as to obtain the position information of the face key points in the image.
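A rough sketch of this per-image pipeline follows; `seat_locator`, `face_detector`, and `landmark_detector` are hypothetical stand-ins for whatever localization and detection models are actually deployed:

```python
from typing import Callable, Optional
import numpy as np

def face_keypoints_in_image(image: np.ndarray,
                            seat_locator: Callable,
                            face_detector: Callable,
                            landmark_detector: Callable) -> Optional[np.ndarray]:
    """Locate the driving-seat area, detect the face inside it, and return
    face keypoint coordinates expressed in full-image coordinates."""
    x0, y0, x1, y1 = seat_locator(image)       # driving-seat ROI box
    roi = image[y0:y1, x0:x1]
    face_box = face_detector(roi)              # None when no face is found
    if face_box is None:
        return None
    pts = landmark_detector(roi, face_box)     # (n, 2) keypoints in ROI coords
    return pts + np.array([x0, y0])            # shift back to image coords
```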
In step S203, the conversion processing is performed on the position information acquired from the second in-vehicle image based on the position conversion relationship between the images of the same object captured by the second camera and the first camera when the first camera performs image capturing in the specified image capturing range.
In practical applications, the first camera is mainly used to acquire in-vehicle images of the driving seat area. Because users differ considerably in height, build, and other body characteristics, the network car booking operator fixes only the orientation of the DMS camera and allows the user to adjust its image acquisition range within that orientation, so that the first camera can suit users with different body characteristics.
Considering that most users share common body characteristics (for example, heights around 175 cm and medium builds), the specified image acquisition range of the first camera can be determined based on these common characteristics. The position conversion relationship between images of the same object acquired by the second camera and by the first camera, when the first camera acquires within the specified range, can then be determined. This position conversion relationship is the coordinate conversion applied when mapping an object's position in an image captured by the second camera to its position in an image captured by the first camera, for the same object captured by both cameras.
For example, an image 1 obtained when the first camera acquires within the specified image acquisition range and an image 2 acquired by the second camera may be captured, and the following is performed on each: convert the image to grayscale; delimit the driving seat area in the grayscale image; extract feature points of the driving seat from the delimited area (the feature points can be predetermined from the shape and color of the driving seat); and describe each feature point with a preset algorithm, such as the BRIEF algorithm, to obtain a binary feature description vector.
Further, the feature points in image 1 and image 2 are matched based on the feature description vector of each feature point in the two images, and the position conversion relationship between images of the same object acquired by the second camera and by the first camera within the specified image acquisition range is then calculated from the feature description vectors of the matched feature points.
In a specific implementation, the position information acquired from the second in-vehicle image is converted based on the position conversion relationship between images of the same object acquired by the second camera and by the first camera within the specified image acquisition range. That is, the position information of the target object in the second image is converted into the position information the target object would have if the first camera captured the first image within the specified image acquisition range. Whether the actual image acquisition range of the first camera exceeds the allowable range can then be judged by comparing the converted position information with the position information of the target object acquired from the first in-vehicle image.
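Assuming the position conversion relationship is represented as a 3x3 homography matrix H (as in the calibration procedure described later), the conversion step could be sketched with OpenCV as follows:

```python
import cv2
import numpy as np

def convert_positions(points_dvr: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map (n, 2) pixel coordinates detected in the second (DVR) in-vehicle
    image into the coordinate frame of the first (DMS) camera at its
    specified image acquisition range."""
    pts = points_dvr.reshape(-1, 1, 2).astype(np.float64)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```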
In step S204, it is determined whether the actual image capturing range of the first camera exceeds the allowable image capturing range based on the converted position information and the position information acquired from the first in-vehicle image.
The allowable image capturing range may be predetermined from the designated image capturing range, for example by enlarging the designated range as a whole by a preset multiple, such as 1.2, or by stretching the designated range vertically by a preset multiple, such as 1.2.
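As an illustrative sketch, the whole-range enlargement could be computed as follows for a range expressed as an axis-aligned rectangle; the rectangle representation and the centered scaling are assumptions:

```python
def enlarge_range(x0: float, y0: float, x1: float, y1: float,
                  factor: float = 1.2) -> tuple:
    """Scale a rectangular image-acquisition range about its center by
    `factor` to obtain the allowable range."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * factor, (y1 - y0) * factor
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
```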
From the above analysis, the converted position information indicates where the target object would appear if the first camera captured the first image within the designated acquisition range, while the position information acquired from the first in-vehicle image indicates where the target object appears under the current acquisition range. Comparing the two therefore reveals whether the actual image acquisition range of the first camera exceeds the allowable range.
In this embodiment of the application, by means of the position conversion relationship between images of the same object acquired by the second camera and by the first camera within the specified image acquisition range, whether the user's use of the first camera is compliant can be judged. The situation where the user has adjusted the first camera to a position where the effective region cannot be captured can thus be effectively discovered, which helps standardize the user's use of the first camera.
Fig. 3 is a flowchart of a method for controlling use of a camera according to an embodiment of the present application, including the following steps.
In step S301, a first in-vehicle image captured by a first camera and a second in-vehicle image captured by a second camera are acquired.
In step S302, position information of the target object in the designated area is acquired from the first in-vehicle image and the second in-vehicle image, respectively, wherein the position information at least includes position information of a key point of the target object.
In step S303, the conversion processing is performed on the position information of the target object acquired from the second in-vehicle image based on the position conversion relationship between the images of the same object acquired by the second camera and the first camera when the first camera performs image acquisition in the specified image acquisition range.
In step S304, a first specification value is determined based on the position information of the key points of the target object after the conversion and the position information of the key points of the target object extracted from the first in-vehicle image.
The first specification value is used to represent the degree of coincidence between the converted key points of the target object and the key points of the target object extracted from the first in-vehicle image.
In a specific implementation, the first specification value may be determined according to a process shown in fig. 4, where the process includes the following steps:
in step S401a, the distance between the key points is determined based on the position information of each key point of the converted target object and the position information of the corresponding key point extracted from the first in-vehicle image.
Assume the position information of the i-th key point of the converted target object is $(HBx_i, HBy_i)$, and the position information of the i-th key point extracted from the first in-vehicle image is $(Cx_i, Cy_i)$. The distance $d_i$ between the two key points is:

$$d_i = \sqrt{(HBx_i - Cx_i)^2 + (HBy_i - Cy_i)^2}$$
in step S402a, a first specification value is determined based on the distance between the keypoints.
For example, the first specification value sim1 is determined according to the following formula:

$$s_i = \begin{cases} 1 - d_i / r, & d_i < r \\ 0, & d_i \ge r \end{cases}$$

$$sim1 = \frac{1}{n} \sum_{i=1}^{n} s_i$$

where n represents the number of key points and r is a set radius.
According to the above formula, the smaller the distances between corresponding key points, the larger sim1. Since each distance reflects the gap between a key point of the current target object and the same key point when the object is at the specified position (a position within the specified image acquisition range), sim1 can represent the degree of coincidence between the key points of the target object.
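A numpy sketch of this coincidence score, under the clipped-linear reading of the formula above:

```python
import numpy as np

def keypoint_similarity(converted: np.ndarray, observed: np.ndarray,
                        r: float) -> float:
    """sim1: average per-keypoint agreement between (n, 2) converted and
    observed coordinates; a keypoint contributes 1 - d_i / r when its
    distance d_i is below the radius r, else 0."""
    d = np.linalg.norm(converted - observed, axis=1)   # d_i per keypoint
    return float(np.mean(np.clip(1.0 - d / r, 0.0, 1.0)))
```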
In step S305, it is determined whether the actual image capturing range of the first camera exceeds the allowable image capturing range according to the first specification value.
For example, when the first specification value is greater than the set value, it is determined that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range, and when the first specification value is not greater than the set value, it is determined that the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
In this embodiment of the application, by means of the position conversion relationship between images of the same object acquired by the second camera and by the first camera within the specified image acquisition range, the degree of coincidence between key points of the target object is determined, and whether the user's use of the first camera is compliant is judged from that coincidence. The situation where the user has adjusted the first camera so that it cannot capture the effective area can thus be effectively discovered, which helps standardize the user's use of the first camera.
Fig. 5 is a flowchart of a method for controlling use of a camera according to an embodiment of the present application, including the following steps.
In step S501, a first in-vehicle image captured by a first camera and a second in-vehicle image captured by a second camera are acquired.
In step S502, position information of the target object in the designated area is acquired from the first in-vehicle image and the second in-vehicle image, respectively, where the position information includes position information of a key point of the target object and position area information of the target object.
In step S503, the conversion processing is performed on the position information of the target object acquired from the second in-vehicle image based on the position conversion relationship between the images of the same object acquired by the second camera and the first camera when the first camera performs image acquisition in the specified image acquisition range.
In step S504, a first specification value is determined based on the position information of the key points of the target object after the conversion and the position information of the key points of the target object extracted from the first in-vehicle image.
The first specification value is used to represent the degree of coincidence between the converted key points of the target object and the key points of the target object extracted from the first in-vehicle image.
In step S505, a second specification value is determined based on the position area information of the target object after the conversion and the position area information of the target object extracted from the first in-vehicle image.
The second specification value is used to represent the degree of coincidence between the converted position area of the target object and the position area of the target object extracted from the first in-vehicle image.
For example, the second specification value sim2 is determined according to the following formula:

$$Area_{A \cap B} = Area(A \cap B)$$

$$sim2 = \frac{Area_{A \cap B}}{Area_A + Area_B - Area_{A \cap B}}$$

where A represents the converted position area of the target object, $Area_A$ represents the area of A, B represents the position area of the target object in the first in-vehicle image, and $Area_B$ represents the area of B.
From the above formula, the more the position region of the target object in the first in-vehicle image overlaps the converted position region, the larger sim2; and the larger the overlap, the more likely the target object falls within the allowable image acquisition range. sim2 can therefore represent the degree of coincidence between the position regions of the target object.
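A sketch of sim2 under the intersection-over-union reading above, for axis-aligned boxes given as (x0, y0, x1, y1):

```python
def region_similarity(a: tuple, b: tuple) -> float:
    """sim2: intersection-over-union of the converted region `a` and the
    region `b` observed in the first in-vehicle image."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)   # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```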
In step S506, it is determined whether the actual image capturing range of the first camera exceeds the allowable image capturing range according to the first specification value and the second specification value.
For example, when the average of the first specification value and the second specification value is greater than a preset value, it is determined that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range; when the average is not greater than the preset value, it is determined that the actual image acquisition range exceeds the allowable range.
In this embodiment of the application, by means of the position conversion relationship between images of the same object acquired by the second camera and by the first camera within the specified image acquisition range, the degree of coincidence between key points of the target object and the degree of coincidence between position areas of the target object are both determined, and whether the user's use of the first camera is compliant is judged from the two coincidence degrees. The situation where the user has adjusted the first camera so that it cannot capture the effective area can thus be effectively discovered, which helps standardize the user's use of the first camera.
In addition, the inventors found that once the moving speed of the vehicle reaches 30 km/h, the user's sitting posture is more standard, so the judgment of whether the user's use of the first camera is compliant is more accurate. Therefore, in any of the above embodiments, the moving speed of the vehicle may be detected first, and the first in-vehicle image from the first camera and the second in-vehicle image from the second camera acquired only when the detected speed exceeds a set speed, for example 30 km/h, to improve the accuracy of the compliance judgment.
In addition, in practical application, the first camera and the second camera may be blocked, so that whether the first camera is blocked or not can be determined by using the position information acquisition result of the first in-vehicle image, and whether the second camera is blocked or not can also be determined by using the position information acquisition result of the second in-vehicle image.
For example, while the vehicle is running, if no position information of the target object is obtained from the first in-vehicle image, the first camera is determined to be blocked. If position information of the target object is obtained, the probability that the first camera is blocked can be calculated from it: when the calculated probability exceeds a set probability, the first camera is determined to be blocked; otherwise, it is determined not to be blocked. The occlusion judgment for the second camera is similar and is not repeated here.
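A sketch of this occlusion decision; `detector`, `occlusion_prob`, and the 0.5 default for the set probability are hypothetical stand-ins for the deployed components:

```python
from typing import Callable

def camera_occluded(image, detector: Callable, occlusion_prob: Callable,
                    p_max: float = 0.5) -> bool:
    """Declare the camera occluded when no target is found, or when the
    occlusion probability estimated from the detection exceeds p_max."""
    det = detector(image)        # None when no target object is detected
    if det is None:
        return True
    return occlusion_prob(det) > p_max
```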
In addition, in a specific implementation, when the first or second camera is determined to be blocked, an occlusion event for the corresponding camera can be reported; and when the actual image acquisition range of the first camera is determined to exceed the allowable range, a non-compliance event for the first camera's use can be reported, so that relevant personnel can take corresponding measures and improve the network car booking operation.
The technical solution of the present application will be described below with reference to specific examples.
Fig. 6 is a schematic flowchart of usage control of a camera according to an embodiment of the present disclosure, which includes a pre-calibration module, a DVR occlusion detection module, and a DMS compliance detection module. The modules are described separately below with reference to fig. 6.
First, the pre-calibration module
This module is mainly used to calibrate the position conversion relationship between images of the same object collected by the DVR camera and by the DMS camera. The relationship can be expressed as a mapping matrix, referred to below as the homography matrix H. In practical applications, H can be determined once the DVR camera and the DMS camera are both installed at their designated positions.
Referring to fig. 7, the process of determining the homography matrix H includes:
the method comprises the following steps: first, acquiring an image 1 collected by the DVR camera and an image 2 collected by the DMS camera, and performing graying processing on each;
secondly, defining a driving position region ROI1 in the grayed image 1 and defining a driving position region ROI2 in the grayed image 2;
and thirdly, respectively extracting feature points of the driving positions from the ROI1 and the ROI 2.
In specific implementation, the following formulas are used to process the ROI1 and ROI2 respectively:
$$I_x(x, y) = I(x+1, y) - I(x-1, y), \qquad I_y(x, y) = I(x, y+1) - I(x, y-1)$$

$$I_{Grad}(x, y) = \begin{cases} 255, & \sqrt{I_x^2 + I_y^2} \ge th1 \\ 0, & \text{otherwise} \end{cases}$$

$$I_{Seg}(x, y) = \begin{cases} 1, & |I(x, y) - mean| \ge th2 \\ 0, & \text{otherwise} \end{cases}$$

$$I_{Feat} = I_{Grad} * I_{Seg}$$

where $I_x$ and $I_y$ are the gradients in the horizontal and vertical directions, respectively; th1 and th2 are predetermined grayscale thresholds; mean is the gray-level mean of the image; $I_{Grad}$ is the gradient intensity, not retained where it falls below the threshold; $I_{Seg}$ is the segmented region; and $I_{Feat}$ is the feature point set obtained by integrating the gradient and segmentation characteristics.
Then, pixel points with a gray value of 255 are selected from the $I_{Feat}$ image corresponding to ROI1 to form the feature point set Feat1, and pixel points with a gray value of 255 are selected from the $I_{Feat}$ image corresponding to ROI2 to form the feature point set Feat2.
Fourthly, calculating feature descriptors of all feature points in Feat1 by adopting a BRIEF algorithm to obtain a feature descriptor set Feat _ det1, calculating feature descriptors of all feature points in Feat2 by adopting the BRIEF algorithm to obtain a feature descriptor set Feat _ det2, and then determining matched feature points in Feat _ det1 and Feat _ det2 based on the feature descriptors of all feature points.
And fifthly, calculating the homography matrix H by using the feature descriptors of the matched feature points in the Feat _ det1 and the Feat _ det 2.
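The procedure above can be approximated in code. First, a numpy sketch of the $I_{Feat}$ feature-mask construction, following the reconstructed reading of the formulas above (the exact threshold semantics are assumptions):

```python
import numpy as np

def feature_mask(gray: np.ndarray, th1: float, th2: float) -> np.ndarray:
    """Build I_Feat: 255 where both the gradient magnitude and the gray-level
    deviation from the image mean clear their thresholds, else 0."""
    g = gray.astype(np.float64)
    ix = np.gradient(g, axis=1)                          # horizontal gradient I_x
    iy = np.gradient(g, axis=0)                          # vertical gradient I_y
    i_grad = np.where(np.hypot(ix, iy) >= th1, 255, 0)   # thresholded magnitude
    i_seg = np.where(np.abs(g - g.mean()) >= th2, 1, 0)  # segmentation mask
    return (i_grad * i_seg).astype(np.uint8)             # I_Feat = I_Grad * I_Seg
```

Building on such feature selection, the matching and homography fit can be sketched with OpenCV primitives; here ORB keypoints (whose descriptor is BRIEF-based) stand in for the Feat selection and BRIEF description above, so this is an approximation of the procedure rather than a literal transcription:

```python
import cv2
import numpy as np

def calibrate_homography(img_dvr, img_dms, roi_dvr, roi_dms):
    """Estimate the homography H mapping driving-seat pixels in the DVR image
    to the DMS image captured at the specified acquisition range; roi_* are
    (x0, y0, x1, y1) driving-seat boxes in each image."""
    orb = cv2.ORB_create(nfeatures=1000)     # BRIEF-based binary descriptors
    kps, descs = [], []
    for img, (x0, y0, x1, y1) in ((img_dvr, roi_dvr), (img_dms, roi_dms)):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # graying step
        mask = np.zeros(gray.shape, np.uint8)
        mask[y0:y1, x0:x1] = 255                         # restrict to seat ROI
        k, d = orb.detectAndCompute(gray, mask)
        kps.append(k)
        descs.append(d)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descs[0], descs[1])          # match descriptors
    src = np.float32([kps[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps[1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0) # robust fit of H
    return H
```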
Second, the DVR occlusion detection module
This module is mainly used to detect whether the DVR camera is occluded. When the DVR camera is not occluded, the images it collects and the homography matrix H are used to judge whether the use of the DMS camera is compliant.
The method comprises the following steps: first, a number of images collected by the DVR camera are obtained in advance, and the driving seat area in each image is cropped and labeled, where the labeling information includes the user's body position, face position, and facial feature points, such as the coordinates of five feature points: left eye, right eye, nose tip, left mouth corner, and right mouth corner;
second, a detection model B_det_model is trained using a deep learning framework such as RetinaFace, with each image as input and its labeling information as output;
and thirdly, under the network appointment vehicle operation state, when the vehicle speed is 30km/h, acquiring an in-vehicle image collected by the DVR camera in real time, detecting the in-vehicle image collected by the DVR camera by using a B _ det _ model, determining that the current DVR camera is not shielded when the B _ det _ model can output three information of a human body, a human face and a human face key point, and otherwise, determining that the current DVR camera is shielded, and reporting a shielding event of the DVR camera.
Third, the DMS compliance detection module
This module is mainly used to judge whether the user's use of the DMS camera is compliant, using the homography matrix H obtained by the pre-calibration module and the face position and face keypoint information obtained by the DVR occlusion detection module. Referring to fig. 8, the method comprises the following steps:
the method comprises the steps of firstly, acquiring in-vehicle images acquired by a DMS camera when the vehicle speed is more than 30km/h in advance, intercepting and labeling images of a driving seat area in each image, wherein labeling information comprises face position information and face characteristic point information such as coordinate positions of five characteristic points of left and right eyes, nose tips and left and right mouth corners of a person;
here, the human body is not labeled because the human body is not fully observed by the typical DMS camera.
Second, a detection model C_det_model is trained using a deep learning framework such as RetinaFace, with each image as input and its labeling information as output.
Third, during network car booking operation, when the vehicle speed exceeds 30 km/h, an in-vehicle image collected by the DMS camera is acquired and detected with C_det_model, yielding the user's face position coordinates (Cx1, Cy1), (Cx2, Cy2) and the position information (Cxi, Cyi) of five face key points. Here, i = 1, 2 denote the position of the face in the whole image, which can be represented by two coordinates (for example, the two ends of a diagonal of the face box), and i = 3, 4, 5, 6, 7 denote, in order, the coordinates of the left eye, right eye, nose tip, left mouth corner, and right mouth corner. Meanwhile, the face position information and face keypoint information are obtained from the DVR occlusion detection module and transformed with the homography matrix H to obtain (HBxi, HByi), with i ranging from 1 to 7.
Fourth, a deviation-degree detection module evaluates the degree of fit between the current DMS camera and the specified position, from the following two aspects:
in a first aspect: face position information
A = (HBx1, HBy1, HBx2, HBy2),

B = (Cx1, Cy1, Cx2, Cy2),

$$Area_{A \cap B} = \max(0, \min(HBx_2, Cx_2) - \max(HBx_1, Cx_1)) \cdot \max(0, \min(HBy_2, Cy_2) - \max(HBy_1, Cy_1))$$

$$sim2 = \frac{Area_{A \cap B}}{Area_A + Area_B - Area_{A \cap B}}$$
The meaning of each parameter is the same as before, and is not described herein again.
In a second aspect: face key point
$$d_i = \sqrt{(HBx_i - Cx_i)^2 + (HBy_i - Cy_i)^2}, \quad i = 3, \dots, 7$$

$$s_i = \begin{cases} 1 - d_i / r, & d_i < r \\ 0, & d_i \ge r \end{cases}$$

$$sim1 = \frac{1}{n} \sum_i s_i$$
The meaning of each parameter is the same as before, and is not described herein again.
Further, the offset calculation combines sim1 and sim2: for example, when sim = (sim1 + sim2)/2 is greater than a preset threshold th3, the current use of the DMS camera is determined to be compliant; otherwise, it is determined to be non-compliant and a DMS non-compliance event is reported.
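Putting the two scores together, the compliance decision reduces to a few lines; the 0.6 default for th3 is an illustrative value, not from the application:

```python
def dms_usage_compliant(sim1: float, sim2: float, th3: float = 0.6) -> bool:
    """Compliant when the mean of the keypoint score sim1 and the face-box
    score sim2 exceeds the preset threshold th3."""
    return (sim1 + sim2) / 2.0 > th3
```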
In this embodiment of the application, whether the DVR and DMS cameras are occluded can be judged, and when the DVR camera is not occluded, the images it collects can be used to judge whether the user's use of the DMS camera is compliant. This facilitates operation management by the network car booking operator and helps standardize users' use of the network car booking service.
When the method provided in the embodiments of the present application is implemented in software or hardware or a combination of software and hardware, a plurality of functional modules may be included in the electronic device, and each functional module may include software, hardware or a combination of software and hardware.
Fig. 9 is a schematic structural diagram of a device for controlling use of a camera according to an embodiment of the present application, and includes an image acquisition unit 901, a position acquisition unit 902, a conversion unit 903, and a determination unit 904.
The image acquiring unit 901 is configured to acquire a first in-vehicle image acquired by a first camera and a second in-vehicle image acquired by a second camera, where an orientation of the first camera is fixed, an image acquisition range in the orientation is adjustable, and both the orientation and the image acquisition range of the second camera are fixed;
a position acquisition unit 902, configured to acquire position information of a target object in a specified area from the first in-vehicle image and the second in-vehicle image, respectively;
a conversion unit 903, configured to perform conversion processing on position information acquired from the second in-vehicle image based on a position conversion relationship between images of the same object acquired by the second camera and the first camera when the first camera performs image acquisition in a specified image acquisition range;
a determining unit 904, configured to determine whether an actual image capturing range of the first camera exceeds an allowable image capturing range based on the converted position information and the position information acquired from the first in-vehicle image, where the allowable image capturing range is predetermined according to the specified image capturing range.
In a possible implementation manner, the location information at least includes location information of a key point of the target object, and the determining unit 904 specifically includes:
a first specification subunit 9041, configured to determine a first specification value based on the position information of the key point of the converted target object and the position information of the key point extracted from the first in-vehicle image, where the first specification value is used to represent a degree of coincidence between the key point of the converted target object and the key point extracted from the first in-vehicle image;
and the judging subunit 9042 is configured to determine, according to the first specification value, whether an actual image acquisition range of the first camera exceeds an allowable image acquisition range.
In a possible implementation, the first specification subunit 9041 is specifically configured to:
determining the distance between the key points based on the converted position information of each key point of the target object and the position information of the corresponding key point extracted from the first in-vehicle image;
based on the distances between the keypoints, a first specification value is determined.
In a possible implementation, the first specification subunit 9041 is specifically configured to determine the first specification value sim1 according to the following formula:

$$s_i = \begin{cases} 1 - d_i / r, & d_i < r \\ 0, & d_i \ge r \end{cases}$$

$$sim1 = \frac{1}{n} \sum_{i=1}^{n} s_i$$

where $d_i$ is the distance for key point $i$, $n$ is the number of key points, and $r$ is a set radius.
In a possible implementation, the position information further includes position area information of the target object, and the determining unit 904 further includes:
a second specification subunit 9043, configured to determine a second specification value based on the converted position area information of the target object and the position area information extracted from the first in-vehicle image, where the second specification value is used to represent a degree of coincidence between the converted position area of the target object and the position area extracted from the first in-vehicle image;
the judging subunit 9042 is further configured to determine, according to the first specification value and the second specification value, whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
In a possible implementation, the second specification subunit 9043 is specifically configured to determine the second specification value sim2 according to the following formula:
$$Area_{A \cap B} = Area(A \cap B)$$

$$sim2 = \frac{Area_{A \cap B}}{Area_A + Area_B - Area_{A \cap B}}$$

where A represents the converted position area of the target object, $Area_A$ represents the area of A, B represents the position area of the target object in the first in-vehicle image, and $Area_B$ represents the area of B.
In a possible implementation manner, the determining subunit 9042 is specifically configured to:
and when the average value of the first standard value and the second standard value is larger than a preset value, determining that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range.
In one possible embodiment, the device further includes:
a detection unit 905 for detecting a moving speed of the vehicle;
the image obtaining unit 901 is further configured to obtain the first in-vehicle image collected by the first camera and the second in-vehicle image collected by the second camera when the detected movement speed is greater than a set speed.
In a possible implementation, the device further includes an occlusion determining unit 906, configured to:
determining whether the first camera is blocked or not based on the position information acquisition result of the first in-vehicle image; and/or
And determining whether the second camera is blocked or not based on the position information acquisition result of the second in-vehicle image.
The division of modules in the embodiments of the present application is illustrative and is merely a division by logical function; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments may each be integrated into one processor, exist separately physically, or be combined, two or more at a time, into one module. The modules are typically coupled to each other through interfaces, usually electrical communication interfaces, though mechanical or other forms of interface are not excluded. Thus, modules described as separate components may or may not be physically separate, and may be located in one place or distributed across the same or different devices. An integrated module can be implemented in hardware or as a software functional module.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device includes a transceiver 1001 and a processor 1002, and the processor 1002 may be a Central Processing Unit (CPU), a microprocessor, an application specific integrated circuit, a programmable logic circuit, a large scale integrated circuit, or a digital Processing Unit. The transceiver 1001 is used for data transmission and reception between an electronic device and other devices.
The electronic device may further comprise a memory 1003 for storing software instructions executed by the processor 1002, and may of course also store some other data required by the electronic device, such as identification information of the electronic device, encryption information of the electronic device, user data, etc. The Memory 1003 may be a Volatile Memory (Volatile Memory), such as a Random-Access Memory (RAM); the Memory 1003 may also be a Non-Volatile Memory (Non-Volatile Memory) such as a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk Drive (HDD) or a Solid-State Drive (SSD), or the Memory 1003 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1003 may be a combination of the above memories.
The embodiments of the present application do not limit the specific connection medium among the processor 1002, the memory 1003, and the transceiver 1001. In Fig. 10 they are connected by the bus 1004, shown as a thick line; this connection manner is merely illustrative and not limiting, as is the connection manner between other components. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 10, but this does not mean that there is only one bus or only one type of bus.
The processor 1002 may be dedicated hardware or a processor capable of running software. In the latter case, the processor 1002 reads the software instructions stored in the memory 1003 and, driven by those instructions, executes the camera use control method described in the foregoing embodiments.
An embodiment of the present application further provides a storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute the camera use control method of the foregoing embodiments.
In some possible embodiments, various aspects of the camera use control method provided in the present application may also be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to execute the camera use control method of the foregoing embodiments.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for camera use control in the embodiments of the present application may employ a CD-ROM, include program code, and run on a computing device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing device may be connected to the user's computing device over any kind of network, such as a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, over the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit; conversely, the features and functions of one unit described above may be further divided among and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they are aware of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all variations and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (20)

1. A camera use control method, characterized by comprising:
acquiring a first vehicle interior image acquired by a first camera and a second vehicle interior image acquired by a second camera, wherein the orientation of the first camera is fixed, the image acquisition range in the orientation is adjustable, and the orientation and the image acquisition range of the second camera are both fixed;
acquiring position information of a target object in a specified area from the first in-vehicle image and the second in-vehicle image respectively;
performing conversion processing on the position information acquired from the second in-vehicle image based on a position conversion relationship between images of the same object acquired by the second camera and by the first camera when the first camera performs image acquisition in a specified image acquisition range;
and determining whether the actual image acquisition range of the first camera exceeds an allowable image acquisition range based on the converted position information and the position information acquired from the first in-vehicle image, wherein the allowable image acquisition range is predetermined according to the specified image acquisition range.
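For illustration only, the conversion step above can be pictured as applying a pre-calibrated mapping that takes a point observed by the fixed second camera into the first camera's image plane at its specified acquisition range. The planar-homography form and the calibration values below are illustrative assumptions; the claim only requires that some position conversion relationship between the two cameras' images be established in advance.

```python
import numpy as np

# Illustrative 3x3 homography from offline calibration: it maps pixel
# coordinates in the second in-vehicle image to pixel coordinates in the
# first in-vehicle image captured at the specified acquisition range.
H = np.array([[1.02, 0.01, -14.0],
              [0.00, 0.98,   6.5],
              [0.00, 0.00,   1.0]])

def convert_position(point_xy):
    # Project one key point from the second image into the first image.
    u, v, w = H @ np.array([point_xy[0], point_xy[1], 1.0])
    return (u / w, v / w)
```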
2. The method of claim 1, wherein the position information at least includes position information of key points of the target object, and determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range based on the converted position information and the position information acquired from the first in-vehicle image comprises:
determining a first specification value based on the converted position information of the key points of the target object and the position information of the key points extracted from the first in-vehicle image, wherein the first specification value is used for representing the degree of coincidence between the converted key points of the target object and the key points extracted from the first in-vehicle image;
and determining, according to the first specification value, whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
3. The method of claim 2, wherein determining the first specification value based on the converted position information of the key points of the target object and the position information of the key points extracted from the first in-vehicle image comprises:
determining the distance between corresponding key points based on the converted position information of each key point of the target object and the position information of the corresponding key point extracted from the first in-vehicle image;
and determining the first specification value based on the distances between the key points.
4. The method according to claim 3, wherein the first specification value sim1 is determined according to the following formula:
(formula for sim1, published as an image in the original document, defined in terms of the distances d_i, the number of key points n, and the set radius r)
wherein d_i represents the distance between the key points i, n represents the number of key points, and r represents the set radius.
5. The method of claim 2, wherein the position information further includes position area information of the target object, and the method further comprises:
determining a second specification value based on the converted position area information of the target object and the position area information extracted from the first in-vehicle image, wherein the second specification value is used for representing the degree of coincidence between the converted position area of the target object and the position area extracted from the first in-vehicle image;
and determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range according to the first specification value comprises:
determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range according to the first specification value and the second specification value.
6. The method of claim 5, wherein the second specification value sim2 is determined according to the following formula:
(formula for sim2, published as an image in the original document, defined in terms of the position areas A and B and their areas Area_A and Area_B)
wherein A represents the position area of the converted target object, Area_A represents the area of the position area of the converted target object, B represents the position area of the target object in the first in-vehicle image, and Area_B represents the area of the position area of the target object in the first in-vehicle image.
7. The method of claim 5, wherein determining whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range according to the first specification value and the second specification value comprises:
when the average value of the first specification value and the second specification value is greater than a preset value, determining that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range.
8. The method of claim 1, further comprising:
detecting the movement speed of the vehicle;
and when the detected movement speed is greater than a set speed, acquiring the first in-vehicle image acquired by the first camera and the second in-vehicle image acquired by the second camera.
9. The method of any of claims 1-8, further comprising:
determining whether the first camera is blocked based on the position information acquisition result of the first in-vehicle image; and/or
determining whether the second camera is blocked based on the position information acquisition result of the second in-vehicle image.
10. A camera use control apparatus, characterized by comprising:
an image acquisition unit, configured to acquire a first in-vehicle image collected by a first camera and a second in-vehicle image collected by a second camera, wherein the orientation of the first camera is fixed, the image acquisition range in the orientation is adjustable, and the orientation and the image acquisition range of the second camera are both fixed;
a position acquisition unit configured to acquire position information of a target object in a specified area from the first in-vehicle image and the second in-vehicle image, respectively;
a conversion unit, configured to perform conversion processing on the position information acquired from the second in-vehicle image based on a position conversion relationship between images of the same object acquired by the second camera and by the first camera when the first camera performs image acquisition in a specified image acquisition range;
and a determining unit, configured to determine whether the actual image acquisition range of the first camera exceeds an allowable image acquisition range based on the converted position information and the position information acquired from the first in-vehicle image, wherein the allowable image acquisition range is predetermined according to the specified image acquisition range.
11. The apparatus according to claim 10, wherein the position information at least includes position information of key points of the target object, and the determining unit specifically includes:
a first specification subunit, configured to determine a first specification value based on the converted position information of the key points of the target object and the position information of the key points extracted from the first in-vehicle image, wherein the first specification value is used for representing the degree of coincidence between the converted key points of the target object and the key points extracted from the first in-vehicle image;
and a judging subunit, configured to determine, according to the first specification value, whether the actual image acquisition range of the first camera exceeds the allowable image acquisition range.
12. The apparatus as recited in claim 11, wherein said first specification subunit is specifically configured to:
determining the distance between corresponding key points based on the converted position information of each key point of the target object and the position information of the corresponding key point extracted from the first in-vehicle image;
and determining the first specification value based on the distances between the key points.
13. The apparatus as recited in claim 12, wherein said first specification subunit is specifically configured to determine a first specification value sim1 according to the following formula:
(formula for sim1, published as an image in the original document, defined in terms of the distances d_i, the number of key points n, and the set radius r)
wherein d_i represents the distance between the key points i, n represents the number of key points, and r represents the set radius.
14. The apparatus of claim 11, wherein the position information further includes position area information of the target object, and the determining unit further comprises:
a second specification subunit, configured to determine a second specification value based on the converted position area information of the target object and the position area information extracted from the first in-vehicle image, where the second specification value is used to represent a degree of coincidence between the converted position area of the target object and the position area extracted from the first in-vehicle image;
the judging subunit is further configured to determine whether an actual image acquisition range of the first camera exceeds an allowable image acquisition range according to the first specification value and the second specification value.
15. The apparatus as recited in claim 14, wherein said second specification subunit is specifically configured to determine a second specification value sim2 according to the following formula:
(formula for sim2, published as an image in the original document, defined in terms of the position areas A and B and their areas Area_A and Area_B)
wherein A represents the position area of the converted target object, Area_A represents the area of the position area of the converted target object, B represents the position area of the target object in the first in-vehicle image, and Area_B represents the area of the position area of the target object in the first in-vehicle image.
16. The apparatus according to claim 15, wherein the judging subunit is specifically configured to:
when the average value of the first specification value and the second specification value is greater than a preset value, determine that the actual image acquisition range of the first camera does not exceed the allowable image acquisition range.
17. The apparatus of claim 10, further comprising:
a detection unit for detecting a moving speed of the vehicle;
the image acquisition unit is further configured to acquire the first in-vehicle image acquired by the first camera and the second in-vehicle image acquired by the second camera when the detected movement speed is greater than a set speed.
18. The apparatus according to any one of claims 10 to 17, further comprising an occlusion determination unit for:
determining whether the first camera is blocked based on the position information acquisition result of the first in-vehicle image; and/or
determining whether the second camera is blocked based on the position information acquisition result of the second in-vehicle image.
19. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-9.
CN202110578154.9A 2021-05-26 2021-05-26 Camera use control method and device, electronic equipment and storage medium Active CN113286086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110578154.9A CN113286086B (en) 2021-05-26 2021-05-26 Camera use control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113286086A true CN113286086A (en) 2021-08-20
CN113286086B CN113286086B (en) 2022-02-18

Family

ID=77281842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110578154.9A Active CN113286086B (en) 2021-05-26 2021-05-26 Camera use control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113286086B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018193412A1 (en) * 2017-04-20 2018-10-25 Octo Telematics Spa Platform for the management and validation of contents of video images, pictures or similars, generated by different devices
WO2020055992A1 (en) * 2018-09-11 2020-03-19 NetraDyne, Inc. Inward/outward vehicle monitoring for remote reporting and in-cab warning enhancements
CN110458895A (en) * 2019-07-31 2019-11-15 腾讯科技(深圳)有限公司 Conversion method, device, equipment and the storage medium of image coordinate system
CN111553947A (en) * 2020-04-17 2020-08-18 腾讯科技(深圳)有限公司 Target object positioning method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339159A (en) * 2021-12-31 2022-04-12 深圳市平方科技股份有限公司 Image acquisition method and device, electronic equipment and storage medium
CN114339159B (en) * 2021-12-31 2023-06-27 深圳市平方科技股份有限公司 Image acquisition method and device, electronic equipment and storage medium
CN114612762A (en) * 2022-03-15 2022-06-10 首约科技(北京)有限公司 Intelligent equipment supervision method

Also Published As

Publication number Publication date
CN113286086B (en) 2022-02-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant