CN113470103B - Method and device for determining camera working distance in vehicle-road cooperation, and road side equipment - Google Patents


Info

Publication number
CN113470103B
CN113470103B (application CN202110724108.5A)
Authority
CN
China
Prior art keywords: image, camera, focal length, pixel, resolution
Prior art date
Legal status: Active (the status is an assumption and is not a legal conclusion)
Application number
CN202110724108.5A
Other languages: Chinese (zh)
Other versions: CN113470103A (en)
Inventor
邓烽
时一峰
苑立彬
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd and Apollo Zhixing Technology Guangzhou Co Ltd
Priority to CN202110724108.5A
Publication of CN113470103A
Priority to PCT/CN2021/135146 (published as WO2023273158A1)
Application granted
Publication of CN113470103B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The application discloses a method, a device, and road side equipment for determining camera working distance in vehicle-road cooperation, relating to the technical field of intelligent traffic and in particular to visual processing. The specific implementation scheme is as follows: first acquire an acquired image and the pixel focal length of a camera; then crop, from the acquired image, a cropped image containing a target object; finally determine the maximum working distance of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where that number is the number of pixels included in the smallest unit the detection model can recognize in the cropped image. Cropping the acquired image enables automatic adjustment of the camera's working distance, improving the flexibility of working-distance adjustment.

Description

Method and device for determining camera working distance in vehicle-road cooperation, and road side equipment
Technical Field
The disclosure relates to the technical field of intelligent transportation, in particular to visual processing, and specifically to a method and device for determining camera working distance in vehicle-road cooperation, and to road side equipment.
Background
In the construction of vehicle-road cooperation (V2X) infrastructure, the road side perception system provides beyond-visual-range perception information for vehicle-road cooperation. The camera is one of the principal sensors of the road side perception system, and its working distance is an important index for measuring the perception system.
The traditional approach uses the original image provided by the camera directly: the original image is undistorted according to the camera's intrinsic parameters, then two-dimensional plane detection and three-dimensional perception positioning are performed on the whole image, so the working distance is determined directly by the original image.
Disclosure of Invention
The disclosure provides a method, a device, an electronic device, a storage medium, a computer program product, road side equipment and a cloud control platform for determining camera working distance in vehicle-road coordination.
According to one aspect of the present disclosure, there is provided a method for determining camera working distance in vehicle-road coordination, the method including: acquiring an acquired image and the pixel focal length of a camera; cropping, from the acquired image, a cropped image containing a target object; and determining the maximum working distance of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels included in the smallest unit that the detection model can recognize in the cropped image.
According to another aspect of the present disclosure, there is provided an apparatus for determining camera working distance in vehicle-road cooperation, the apparatus including: an acquisition module configured to acquire an acquired image of the camera and the pixel focal length; a cropping module configured to crop, from the acquired image, a cropped image containing a target object; and a determining module configured to determine the maximum working distance of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels included in the smallest unit that the detection model can recognize in the cropped image.
According to another aspect of the present disclosure, there is provided an electronic device including at least one processor and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above method for determining camera working distance in vehicle-road coordination.
According to another aspect of the present disclosure, there is provided a computer-readable medium storing computer instructions for enabling a computer to perform the above method for determining camera working distance in vehicle-road coordination.
According to another aspect of the present disclosure, there is provided a computer program product including a computer program which, when executed by a processor, implements the above method for determining camera working distance in vehicle-road coordination.
According to another aspect of the present disclosure, there is provided road side equipment including the electronic device described above.
According to another aspect of the present disclosure, there is provided a cloud control platform including the electronic device described above.
It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are provided for a better understanding of the present solution and do not limit the present disclosure. In the drawings:
FIG. 1 is a flow chart of one embodiment of a method for determining camera working distance in vehicle-road coordination according to the present disclosure;
FIG. 2 is a schematic diagram of an application scenario of the method for determining camera working distance in vehicle-road coordination according to the present disclosure;
FIG. 3 is a flow chart of one embodiment of acquiring the number of pixels per unit distance of a cropped image according to the present disclosure;
FIG. 4 is a flow chart of one embodiment of determining the geographic location of a target object according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for determining camera working distance in vehicle-road coordination according to the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing the method for determining camera working distance in vehicle-road coordination according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings. Various details of the embodiments are included to facilitate understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Referring to fig. 1, fig. 1 shows a flow 100 of an embodiment of the method for determining camera working distance in vehicle-road coordination of the present disclosure. The method includes the following steps:
step 110, acquiring an acquired image and a pixel focal length of a camera.
In this embodiment, the execution body of the method (for example, a terminal device or a server) may receive camera parameters input by a user and, after acquiring them, calculate the pixel focal length of the camera, where the pixel focal length is the focal length expressed in units of pixels. As an example, the execution body may acquire the physical focal length of the camera and the physical size of a pixel, and obtain the pixel focal length from the formula: pixel focal length = physical focal length / physical size of a pixel.
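The conversion in this formula can be sketched as follows (the numeric values are illustrative, not taken from the patent):

```python
def pixel_focal_length(physical_focal_mm: float, pixel_size_mm: float) -> float:
    """Focal length in pixel units: physical focal length divided by
    the physical size of one pixel (both in the same length unit)."""
    return physical_focal_mm / pixel_size_mm

# e.g. an 8 mm lens on a sensor with 2 um (0.002 mm) pixels
f_px = pixel_focal_length(8.0, 0.002)   # about 4000 pixels
```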
The execution body may further acquire an image captured by the camera, and this acquired image may contain a target object used for distance measurement.
Step 120, cropping, from the acquired image, a cropped image containing the target object.
In this embodiment, the execution body may process the acquired image with image processing methods such as image correction, image filtering, image graying, and image enhancement. It may then perform image segmentation on the processed image, that is, divide the image into several regions with distinct properties. Segmentation methods mainly include threshold-based, region-based, edge-based, and theory-specific methods; in the field of deep learning, a multi-layer neural network model such as a deep neural network or a convolutional neural network may also be used. Through segmentation, a cropped image containing the target object can be extracted from the acquired image.
The execution body may also recognize the target object in the acquired image, determine its position in the image, and then crop the acquired image according to the determined position to obtain a cropped image containing the target object.
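As a minimal sketch of this cropping step, assuming the recognizer has already produced a pixel bounding box (the function name and the numbers are illustrative):

```python
import numpy as np

def crop_to_target(image: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the region (x1, y1, x2, y2) containing the target object
    out of the acquired image; rows index y, columns index x."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)    # a 1920x1080 acquired image
crop = crop_to_target(frame, (600, 300, 1560, 840))  # a 960x540 cropped image
```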
Step 130, determining the maximum working distance of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image.
In this embodiment, after obtaining the pixel focal length of the camera, the execution body may compute the number of pixels per unit distance of the cropped image, either from the acquired image and the cropped image or from the cropped image and the detection model. The number of pixels per unit distance of the cropped image is the number of pixels included in the smallest unit that the detection model (a model for detecting the target object) can recognize in the cropped image. The detection model has a smallest detectable target object, and the number of pixels per unit distance for that smallest object can be determined from its actual size; the execution body can therefore determine the cropped image's number of pixels per unit distance from the cropped image and the detection model.
Alternatively, the execution body may determine the number of pixels per unit distance of the acquired image, determine the ratio between the acquired image and the cropped image from their resolutions, and derive the cropped image's number of pixels per unit distance from that ratio and the acquired image's value.
After obtaining the pixel focal length of the camera and the number of pixels per unit distance of the cropped image, the execution body may calculate the maximum working distance of the camera from the ratio between the two. The maximum working distance is the furthest distance at which the camera can detect the target object, and it may be computed by the formula:
max_distance = focal / min_pixels_per_meter
where max_distance is the maximum working distance of the camera, focal is the pixel focal length of the camera, and min_pixels_per_meter is the number of pixels per unit distance of the cropped image.
As an example, suppose the resolution of the acquired image is (w1, h1), the number of pixels per unit distance of the acquired image is min1, and the resolution of the image cropped from it is (w2, h2). The execution body may determine the ratio between the two resolutions as w1/w2 and the number of pixels per unit distance of the cropped image as min1/(w1/w2). On the acquired image, the maximum working distance of the camera is max_distance1 = focal/min1; on the cropped image, it is max_distance = focal/(min1/(w1/w2)) = (w1/w2) · focal/min1, so cropping multiplies the working distance by w1/w2.
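The formula and the width-ratio example can be sketched as follows (focal, min1 and the image widths are hypothetical values, not taken from the patent):

```python
def max_working_distance(focal_px: float, min_pixels_per_meter: float) -> float:
    """max_distance = focal / min_pixels_per_meter (Step 130)."""
    return focal_px / min_pixels_per_meter

focal = 2000.0            # pixel focal length (hypothetical)
min1 = 40.0               # pixels per meter required on the acquired image
w1, w2 = 1920, 960        # acquired-image and cropped-image widths
min2 = min1 / (w1 / w2)   # requirement scaled to the cropped image
d_full = max_working_distance(focal, min1)   # 50.0 m on the full image
d_crop = max_working_distance(focal, min2)   # 100.0 m: doubled by cropping
```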
With continued reference to fig. 2, fig. 2 is a schematic diagram of an application scenario of the method for determining camera working distance in vehicle-road coordination according to this embodiment. In the scenario of fig. 2, the terminal 201 may present a physical-parameter input interface for the camera on its display screen; the user may input the camera's physical parameters there, and the terminal 201 may obtain the camera's pixel focal length and an acquired image containing the target object from those parameters. The terminal 201 may process the acquired image, crop from it a cropped image containing the target object, and then calculate the maximum working distance of the camera from the pixel focal length and the number of pixels per unit distance of the cropped image.
In the method for determining camera working distance in vehicle-road coordination provided by this embodiment, an acquired image and the pixel focal length of a camera are obtained, a cropped image containing the target object is cropped from the acquired image, and the maximum working distance of the camera is determined from the pixel focal length and the number of pixels per unit distance of the cropped image, where the latter is the number of pixels included in the smallest unit that the detection model can recognize in the cropped image. Cropping the acquired image makes automatic adjustment of the camera's working distance possible: because the cropped image is cut from the acquired image, there is a fixed ratio between their resolutions, and the same ratio holds between their numbers of pixels per unit distance. Image cropping can therefore change the number of pixels per unit distance, and with it the camera's working distance. The working distance can thus be increased without changing the camera's focal length, reducing the cost of zoom cameras and improving the flexibility of working-distance adjustment.
As an optional implementation, obtaining the pixel focal length of the camera in step 110 may include: acquiring the physical focal length and imaging sensor parameters of the camera; and determining the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters, and the resolution of the acquired image.
Specifically, the execution body may read camera parameters over a network or receive them from user input to obtain the physical parameters of the camera. These are the basic shooting parameters of the camera, such as the imaging sensor parameters, physical focal length, and shutter speed, and characterize the camera's performance.
The execution body may provide an input interface for the camera's physical parameters through a display device such as a display screen; the user inputs the physical parameters there, so the execution body obtains the physical focal length and imaging sensor parameters from user input. Alternatively, the execution body may read the camera's physical parameters stored on the network to obtain the physical focal length and the imaging sensor parameters.
Once the physical parameters acquired by the execution body include the physical focal length and the imaging sensor parameters, the resolution of the acquired image of the camera can also be determined. The execution body may then determine the pixel focal length of the camera from the physical focal length, the imaging sensor parameters, and the resolution of the acquired image by a pixel-focal-length calculation formula,
where focal represents the pixel focal length of the camera, lens the physical focal length of the camera, img_width and img_height the resolution of the acquired image, and sensor_size the imaging sensor parameter of the camera.
In this implementation, the pixel focal length of the camera is determined from the physical focal length, the imaging sensor parameters, and the resolution of the acquired image; since it follows from a fixed calculation among these quantities, both the efficiency and the accuracy of determining the pixel focal length are improved.
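The formula drawing itself is not reproduced in this text. The sketch below rests on an assumption, not on the patent's wording: that sensor_size denotes the sensor diagonal in millimetres, so the pixel pitch equals the sensor diagonal divided by the image diagonal in pixels. If the patent defines sensor_size differently, the arithmetic changes accordingly:

```python
import math

def pixel_focal_from_sensor(lens_mm: float, img_width: int, img_height: int,
                            sensor_size_mm: float) -> float:
    """Pixel focal length, ASSUMING sensor_size_mm is the sensor diagonal:
    pixel pitch (mm) = sensor diagonal / image diagonal (in pixels)."""
    image_diag_px = math.hypot(img_width, img_height)
    pixel_pitch_mm = sensor_size_mm / image_diag_px
    return lens_mm / pixel_pitch_mm
```

For a sensor whose diagonal corresponds to a 0.002 mm pixel pitch at 1920x1080, this reduces to the earlier relation focal = lens / pixel pitch.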
As another optional implementation, obtaining the pixel focal length of the camera in step 110 may further include: determining a first intrinsic matrix of the camera based on the acquired image and a camera calibration algorithm; and obtaining the pixel focal length of the camera from the first intrinsic matrix.
Specifically, the execution body may acquire an image, calibrate it with a camera calibration algorithm, and compute the first intrinsic matrix of the camera, which may include the pixel focal length of the camera and the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system. For example, the execution body may calibrate the acquired image using Zhang Zhengyou's checkerboard calibration algorithm and obtain the first intrinsic matrix and the extrinsic matrix of the camera. The pixel focal length can then be read from the computed first intrinsic matrix.
In this implementation, the pixel focal length is determined through a camera calibration algorithm, so it can be determined quickly and accurately from the acquired image, improving both the efficiency and the accuracy of determining the pixel focal length.
Referring to fig. 3, fig. 3 illustrates the steps of acquiring the number of pixels per unit distance of a cropped image, which may include the following:
step 310, obtaining the number of pixels per unit distance of the sample image in the detection model.
In this embodiment, the execution body may read the detection model to obtain the number of pixels per unit distance of the sample image in the detection model. The detection model is a model which is trained based on sample images and is used for detecting a target object in an acquired image, the detection model corresponds to a minimum target object which can be detected, and the pixel number which is included in a unit distance and corresponds to the minimum target object can be determined according to the actual size of the minimum target object, so that the pixel number which is included in the unit distance of the sample image in the detection model, namely the pixel number which is included in the unit distance of the sample image and can be identified by the detection model, can be obtained.
Step 320, obtaining the resolution of the sample images and determining the ratio between the sample-image resolution and the cropped-image resolution.
In this embodiment, the detection model detects images at a preset resolution and may be a model trained on sample images of one common resolution. The execution body may obtain the resolution of the sample images, determine the resolution of the cropped image, and then compute the ratio between the two resolutions.
Step 330, obtaining the number of pixels per unit distance of the cropped image from the sample images' number of pixels per unit distance and the ratio.
In this embodiment, after determining the ratio between the sample-image resolution and the cropped-image resolution, the execution body may calculate the number of pixels per unit distance of the cropped image from the sample images' value and that ratio, thereby obtaining the number of pixels included in the smallest unit the detection model can recognize in the cropped image.
In this implementation, the number of pixels per unit distance of the cropped image is calculated from the proportional relationship between the detection model's sample images and the cropped image, which yields the number of pixels in the smallest unit that the detection model can recognize in the cropped image.
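The scaling of Step 330 can be sketched as follows, by analogy with the width-ratio example earlier in the description (the function name and the numbers are illustrative assumptions, not from the patent):

```python
def crop_pixels_per_meter(sample_ppm: float, sample_width: int,
                          crop_width: int) -> float:
    """Scale the per-meter pixel requirement from the detection model's
    sample-image resolution down to the cropped image's resolution."""
    ratio = sample_width / crop_width   # e.g. w_sample / w_crop
    return sample_ppm / ratio

# 40 px/m on a 1920-wide sample image becomes 20 px/m on a 960-wide crop
min_crop = crop_pixels_per_meter(40.0, 1920, 960)
```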
Referring to fig. 4, fig. 4 shows the steps of determining the geographic location of the target object, which may include the following:
step 410, determining a second reference matrix of the truncated image based on the resolution of the truncated image and the first reference matrix.
In this embodiment, the executing body may determine the first internal reference matrix and the external reference matrix of the camera by using a camera calibration algorithm, where the first internal reference matrix includes a pixel focal length of the camera and a coordinate value of a center of the photosensitive plate of the camera under a pixel coordinate system.
The execution body may determine the resolution of the captured image, and determine the coordinate value of the center of the camera photosheet in the pixel coordinate system under the resolution of the captured image after the first internal reference matrix is acquired, that is, the coordinate value may be half of the resolution of the captured image. The execution body may determine a second internal reference matrix of the truncated image according to the determined focal length of the pixel and the coordinate value corresponding to the truncated image.
As an example, the execution body applies Zhang Zhengyou's checkerboard calibration algorithm to the acquired image to determine the first intrinsic matrix, which has the standard form
[ fx 0 cx ; 0 fy cy ; 0 0 1 ]
where fx and fy are the pixel focal lengths of the camera and (cx, cy) are the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system at the acquired-image resolution. After the execution body obtains the resolution (W, H) of the cropped image, half of that resolution is taken as the center coordinates at the cropped-image resolution, i.e. (W/2, H/2), while the pixel focal lengths are kept unchanged. The second intrinsic matrix for the cropped image may therefore be determined as
[ fx 0 W/2 ; 0 fy H/2 ; 0 0 1 ].
step 420, determining the geographic location of the target object based on the second internal and external reference matrices of the truncated image.
In this embodiment, after the execution body obtains the external reference matrix and the second internal reference matrix of the intercepted image is obtained, the second internal reference matrix and the external reference matrix of the intercepted image may be used to perform sensing and positioning on the target object in the intercepted image, so as to determine the geographic position of the target object, where the geographic position may be the actual position of the target object and may be represented by coordinates.
In this implementation, the second intrinsic matrix for the cropped image is determined from the camera's pixel focal length and the cropped image's resolution, and the geographic location of the target object is determined from the second intrinsic matrix and the extrinsic matrix; the geographic location can therefore be determined more accurately, improving the accuracy of position detection.
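The construction of the second intrinsic matrix can be sketched as follows. It follows the rule stated above (keep fx and fy, set the principal point to half the crop resolution); the calibration numbers are illustrative:

```python
import numpy as np

def crop_intrinsics(K_first: np.ndarray, crop_w: int, crop_h: int) -> np.ndarray:
    """Second intrinsic matrix for the cropped image: keep fx and fy,
    move the principal point to the crop centre (W/2, H/2)."""
    K_second = K_first.astype(float).copy()
    K_second[0, 2] = crop_w / 2.0
    K_second[1, 2] = crop_h / 2.0
    return K_second

K1 = np.array([[2000.0, 0.0, 960.0],    # fx, 0, cx  (from calibration)
               [0.0, 2000.0, 540.0],    # 0, fy, cy
               [0.0, 0.0, 1.0]])
K2 = crop_intrinsics(K1, 960, 540)      # principal point becomes (480, 270)
```

Together with the extrinsic matrix from calibration, K2 replaces K1 when backprojecting pixels of the cropped image to world coordinates.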
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for determining camera working distance in vehicle-road coordination. This apparatus embodiment corresponds to the method embodiment shown in fig. 1, and the apparatus can be applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for determining camera working distance in vehicle-road cooperation of this embodiment includes: an acquisition module 510, a cropping module 520 and a determining module 530.
The acquisition module 510 is configured to acquire an acquired image of the camera and the pixel focal length;
the cropping module 520 is configured to crop, from the acquired image, a cropped image containing the target object;
the determining module 530 is configured to determine the maximum working distance of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels included in the smallest unit that the detection model can recognize in the cropped image.
In some optional forms of this embodiment, the acquisition module 510 is further configured to: acquire the physical focal length and imaging sensor parameters of the camera; and determine the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters, and the resolution of the acquired image.
In some optional forms of this embodiment, the acquisition module 510 is further configured to: determine a first intrinsic matrix of the camera based on the acquired image and a camera calibration algorithm; and obtain the pixel focal length of the camera from the first intrinsic matrix.
In some optional forms of this embodiment, the number of pixels per unit distance of the cropped image is obtained by: obtaining the number of pixels per unit distance of the sample images used by the detection model, where the detection model is used to detect the target object and a sample image's number of pixels per unit distance is the number of pixels included in the smallest unit the detection model can recognize in the sample image; obtaining the resolution of the sample images and determining the ratio between the sample-image resolution and the cropped-image resolution; and obtaining the number of pixels per unit distance of the cropped image from the sample images' value and the ratio.
In some optional forms of this embodiment, the determining module 530 is further configured to: determine a second intrinsic matrix for the cropped image based on the resolution of the cropped image and the first intrinsic matrix; and determine the geographic location of the target object based on the second intrinsic matrix and the extrinsic matrix.
The apparatus for determining camera working distance in vehicle-road coordination provided by this embodiment obtains an acquired image and the pixel focal length of a camera, crops a cropped image containing the target object from the acquired image, and determines the maximum working distance of the camera from the pixel focal length and the number of pixels per unit distance of the cropped image, where the latter is the number of pixels included in the smallest unit that the detection model can recognize in the cropped image. Cropping the acquired image makes automatic adjustment of the camera's working distance possible: because the cropped image is cut from the acquired image, there is a fixed ratio between their resolutions, and the same ratio holds between their numbers of pixels per unit distance. Image cropping can therefore change the number of pixels per unit distance, and with it the camera's working distance. The working distance can thus be increased without changing the camera's focal length, reducing the cost of zoom cameras and improving the flexibility of working-distance adjustment.
In the technical solutions of the present disclosure, the acquisition, storage, application, and other processing of any user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, a computer program product, a roadside device, and a cloud control platform.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the methods and processes described above, for example, the camera working distance determination method in vehicle-road cooperation. For example, in some embodiments, the camera working distance determination method in vehicle-road cooperation may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the camera working distance determination method in vehicle-road cooperation described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the camera working distance determination method in vehicle-road cooperation in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
Optionally, the roadside device may include, in addition to the electronic device, a communication component; the electronic device may be integrated with the communication component or provided separately from it. The electronic device may acquire data, such as pictures and videos, from a sensing device (e.g., a roadside camera) for image/video processing and data computation. Optionally, the electronic device itself may also have sensing-data acquisition and communication functions, as in an AI camera, in which case it may directly perform image/video processing and data computation based on the sensing data it acquires.
Optionally, the cloud control platform performs processing in the cloud. The electronic device included in the cloud control platform may acquire data, such as pictures and videos, from the sensing device (e.g., a roadside camera) for image/video processing and data calculation. The cloud control platform may also be called a vehicle-road collaborative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server, etc.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A method for determining the acting distance of a camera in vehicle-road cooperation comprises the following steps:
acquiring an acquired image of a camera and a pixel focal length, wherein the pixel focal length is a focal length taking a pixel as a unit, and the pixel focal length is acquired based on the ratio between the physical focal length of the camera and the physical size of the pixel;
intercepting an intercepted image comprising a target object from the acquired image;
and determining the maximum acting distance of the camera based on the pixel focal length and the unit distance pixel number of the intercepted image, wherein the unit distance pixel number of the intercepted image is the pixel number included in the minimum unit which can be identified by the detection model in the intercepted image, and the unit distance pixel number of the intercepted image is determined based on the resolution ratio value between the acquired image and the intercepted image and the unit distance pixel number of the acquired image.
2. The method of claim 1, wherein the acquiring the pixel focal length of the camera comprises:
acquiring a physical focal length and imaging sensor parameters of the camera;
a pixel focal length of the camera is determined based on the physical focal length, the imaging sensor parameters, and the resolution of the captured image.
3. The method of claim 1, wherein the acquiring the pixel focal length of the camera comprises:
determining a first internal reference matrix of the camera based on the acquired image and a camera calibration algorithm;
and acquiring the pixel focal length of the camera from the first internal reference matrix.
4. The method of claim 1, wherein the number of unit distance pixels of the truncated image is obtained based on:
acquiring the number of unit distance pixels of a sample image in the detection model, wherein the detection model is used for detecting the target object, and the number of unit distance pixels of the sample image is the number of pixels included in a minimum unit which can be identified by the detection model in the sample image;
acquiring the resolution of the sample image, and determining a ratio value between the resolution of the sample image and the resolution of the truncated image;
and acquiring the unit distance pixel number of the intercepted image based on the unit distance pixel number of the sample image and the proportion value.
5. The method according to claim 3, wherein the method further comprises:
determining a second internal reference matrix corresponding to the intercepted image based on the resolution of the intercepted image and the first internal reference matrix;
and determining the geographic position of the target object based on the second internal reference matrix and the external reference matrix corresponding to the intercepted image.
6. A camera action distance determining device in vehicle-road cooperation comprises:
an acquisition module configured to acquire an acquired image of a camera and a pixel focal length, wherein the pixel focal length is a focal length in units of pixels, the pixel focal length being acquired based on a ratio between a physical focal length of the camera and a physical size of the pixel;
an intercepting module configured to intercept an intercepted image including a target object from the acquired image;
and a determining module configured to determine a maximum working distance of the camera based on the pixel focal length and a number of unit distance pixels of the intercepted image, wherein the number of unit distance pixels of the intercepted image is the number of pixels included in a minimum unit that can be identified by a detection model in the intercepted image, and the number of unit distance pixels of the intercepted image is determined based on a resolution ratio value between the acquired image and the intercepted image and the number of unit distance pixels of the acquired image.
7. The apparatus of claim 6, wherein the acquisition module is further configured to:
acquiring a physical focal length and imaging sensor parameters of the camera;
a pixel focal length of the camera is determined based on the physical focal length, the imaging sensor parameters, and the resolution of the captured image.
8. The apparatus of claim 6, wherein the acquisition module is further configured to:
determining a first internal reference matrix of the camera based on the acquired image and a camera calibration algorithm;
and acquiring the pixel focal length of the camera from the first internal reference matrix.
9. The apparatus of claim 6, wherein the number of unit distance pixels of the truncated image is obtained based on:
acquiring the number of unit distance pixels of a sample image in the detection model, wherein the detection model is used for detecting the target object, and the number of unit distance pixels of the sample image is the number of pixels included in a minimum unit which can be identified by the detection model in the sample image;
acquiring the resolution of the sample image, and determining a ratio value between the resolution of the sample image and the resolution of the truncated image;
and acquiring the unit distance pixel number of the intercepted image based on the unit distance pixel number of the sample image and the proportion value.
10. The apparatus of claim 8, wherein the determination module is further configured to:
determining a second internal reference matrix corresponding to the intercepted image based on the resolution of the intercepted image and the first internal reference matrix;
and determining the geographic position of the target object based on the second internal reference matrix and the external reference matrix corresponding to the intercepted image.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-5.
13. A roadside device comprising the electronic device of claim 11.
14. A cloud control platform comprising the electronic device of claim 11.
CN202110724108.5A 2021-06-29 2021-06-29 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment Active CN113470103B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110724108.5A CN113470103B (en) 2021-06-29 2021-06-29 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment
PCT/CN2021/135146 WO2023273158A1 (en) 2021-06-29 2021-12-02 Method and apparatus for determining operating range of camera in cooperative vehicle infrastructure and roadside device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110724108.5A CN113470103B (en) 2021-06-29 2021-06-29 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Publications (2)

Publication Number Publication Date
CN113470103A CN113470103A (en) 2021-10-01
CN113470103B true CN113470103B (en) 2023-11-24

Family

ID=77873630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110724108.5A Active CN113470103B (en) 2021-06-29 2021-06-29 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Country Status (2)

Country Link
CN (1) CN113470103B (en)
WO (1) WO2023273158A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470103B (en) * 2021-06-29 2023-11-24 阿波罗智联(北京)科技有限公司 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
JP2019066451A (en) * 2017-09-28 2019-04-25 キヤノン株式会社 Image measurement apparatus, image measurement method, imaging apparatus and program
CN111241887A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757830A (en) * 1996-02-07 1998-05-26 Massachusetts Institute Of Technology Compact micro-optical edge-emitting semiconductor laser assembly
CN113470103B (en) * 2021-06-29 2023-11-24 阿波罗智联(北京)科技有限公司 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
JP2019066451A (en) * 2017-09-28 2019-04-25 キヤノン株式会社 Image measurement apparatus, image measurement method, imaging apparatus and program
CN111241887A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Performance analysis of a laser active imaging system for space targets; Li Yingchun; Journal of the Academy of Equipment Command & Technology; Vol. 19, No. 1; p. 67 *

Also Published As

Publication number Publication date
CN113470103A (en) 2021-10-01
WO2023273158A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
US8988317B1 (en) Depth determination for light field images
EP3627440B1 (en) Image processing method and apparatus
US11024052B2 (en) Stereo camera and height acquisition method thereof and height acquisition system
CN111862224B (en) Method and device for determining external parameters between camera and laser radar
KR102566998B1 (en) Apparatus and method for determining image sharpness
JP2017520050A (en) Local adaptive histogram flattening
TWI608221B (en) Liquid level detecting system and method thereof
CN111950543A (en) Target detection method and device
CN112991459B (en) Camera calibration method, device, equipment and storage medium
WO2019029573A1 (en) Image blurring method, computer-readable storage medium and computer device
CN111311671B (en) Workpiece measuring method and device, electronic equipment and storage medium
CN112489140A (en) Attitude measurement method
CN111191619B (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN113470103B (en) Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment
CN110926342A (en) Crack width measuring method and device
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN113108919B (en) Human body temperature detection method, device and storage medium
CN116430069A (en) Machine vision fluid flow velocity measuring method, device, computer equipment and storage medium
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN115683046A (en) Distance measuring method, distance measuring device, sensor and computer readable storage medium
CN113344906B (en) Camera evaluation method and device in vehicle-road cooperation, road side equipment and cloud control platform
KR20210134252A (en) Image stabilization method, device, roadside equipment and cloud control platform
CN113344906A (en) Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20231008

Address after: 100176 Room 101, 1st floor, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Applicant after: Apollo Zhixing Technology (Guangzhou) Co.,Ltd.

Address before: 100176 Room 101, 1st floor, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing

Applicant before: Apollo Zhilian (Beijing) Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant