WO2022227020A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2022227020A1
WO2022227020A1 (PCT/CN2021/091556)
Authority
WO
WIPO (PCT)
Prior art keywords
image
angle
processed
vision sensor
camera
Prior art date
Application number
PCT/CN2021/091556
Other languages
English (en)
French (fr)
Inventor
徐文康
王振阳
赵亚西
田勇
黄为
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2021/091556
Priority to CN202180001504.3A (published as CN113348464A)
Publication of WO2022227020A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present application relates to the technical field of automatic driving, and in particular, to an image processing method and device.
  • the driver monitoring system (DMS) can monitor the driver's fatigue state and driving behavior while driving. After detecting incorrect driving states such as fatigue, yawning or squinting, the DMS analyzes such behaviors in a timely manner and reminds the driver to correct dangerous and wrong driving behaviors, so as to ensure driving safety.
  • the shape of a DMS camera varies greatly with the structure surrounding its installation location. As a result, DMS cameras must be custom-designed for different vehicle models and different installation positions, which keeps the R&D cost of DMS cameras high.
  • Embodiments of the present application provide an image processing method and apparatus.
  • the device using the image processing method provided by the present application can be installed in different positions with a general structure, and there is no need to customize devices with different structures for different installation positions.
  • the camera assembly using the image processing method provided by the present application can be installed in different positions of different vehicle models with a common structure, which effectively saves the research and development cost and research and development cycle of the vehicle-mounted camera.
  • a first aspect of the present application provides an image processing method, including: acquiring an image to be processed, wherein the image to be processed is acquired by a visual sensor.
  • the visual sensor involved in this application refers to a device composed of one or more image sensors.
  • the visual sensor involved in this application can also be equipped with a light projector and other auxiliary equipment (such as a camera device including a processor) .
  • the image sensor may be a laser scanner, a camera, a radar sensor, etc.; the specific category of the image sensor is not limited in this application.
  • the processor calibrates the angle of the image to be processed according to the installation parameters of the vision sensor.
  • the processor and the vision sensor can be deployed on the same device, for example, both deployed on a vehicle-mounted camera; they can also be deployed on different devices, for example, the vision sensor is deployed on the vehicle-mounted camera while the processor is deployed on a cloud server, or the vision sensor is deployed on the vehicle-mounted camera while the processor is deployed on the in-vehicle head unit or on-board computer.
  • the installation parameters of the vision sensor involved in this application can be used to indicate the offset angle of the vision sensor relative to each coordinate axis of the world coordinate system. In other words, the installation parameters reflect the relative position/attitude of the vision sensor in space.
  • the world coordinate system can be implemented by using various different coordinate systems.
  • the world coordinate system can be a space rectangular coordinate system (xyz coordinate system), or a geodetic coordinate system, etc., which is not limited in this application.
  • calibrating the angle of the image to be processed means that the processor rotates the image collected by the vision sensor so that the target object included in the image meets a preset calibration standard, where the preset calibration standard can be set according to the actual application scenario.
  • the target object can be any type of object.
  • the images obtained by the DMS camera are used to detect the driving behavior and physiological state of the driver, and the target object can be a person or a human face.
  • the solution provided by the present application can reduce the restriction on the installation position of the vision sensor, as long as the sensing range of the vision sensor can include the target object.
  • since the vision sensor can be installed at any angle as long as its perception range includes the target object, the image obtained by the vision sensor may be skewed, that is, the target object in the image obtained by the vision sensor is not vertical relative to the horizontal plane.
  • the processor calibrates the angle of the image to be processed according to the installation parameters of the vision sensor.
  • for example, if the installation surface of the vision sensor is deflected clockwise by an angle A relative to the horizontal plane, the image to be processed obtained by the vision sensor is rotated counterclockwise by the angle A to correct it, that is, to make the objects in the image to be processed vertical relative to the horizontal plane.
  • the solution provided in this application is applicable to any kind of vision sensor, and the solution provided in this application can reduce the restriction on the installation position of any kind of vision sensor.
  • calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: calibrating the angle of the image so that the included angle between the target object in the image to be processed and a target coordinate axis is within a first preset range, where the target coordinate axis is a coordinate axis perpendicular to the horizontal plane.
  • the included angle between the target object in the image to be processed and the target coordinate axis can be zero degrees or close to zero degrees, so that the object in the image is vertical or nearly vertical relative to the horizontal plane, which helps improve the prediction performance of a subsequent model that uses the image to perform a specified task.
  • the installation parameters include a reference angle
  • the reference angle is used to indicate the angle at which the installation surface of the vision sensor is deflected in the first direction relative to the horizontal plane
  • calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: rotating the image to be processed in a second direction by a target angle, where the deviation between the target angle and the reference angle is within a preset range, and the second direction is opposite to the first direction.
  • a specific calibration method is given, which increases the variety of solutions.
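  • As an illustrative sketch (not the application's normative implementation), this rotation can be written with OpenCV in Python; the function name and the assumption that the reference angle is a clockwise deflection expressed in degrees are ours:

```python
import cv2

def calibrate_image_angle(image, reference_angle_deg):
    """Rotate the image to compensate for the sensor's mounting deflection.

    reference_angle_deg: angle by which the installation surface is deflected
    clockwise (the "first direction") relative to the horizontal plane; the
    image is rotated counterclockwise (the "second direction") by the same
    amount so that objects in it become vertical.
    """
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)  # rotate about the image center
    # In cv2.getRotationMatrix2D, a positive angle means counterclockwise.
    matrix = cv2.getRotationMatrix2D(center, reference_angle_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```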
  • in a possible implementation, the reference angle is updated to a reference angle determined by the current installation parameters. For example, the external parameters of the vision sensor in the initial state are obtained; when it is detected that the external parameters of the vision sensor have changed, the deviation between the changed external parameters and the external parameters in the initial state is obtained; if the deviation exceeds a preset threshold, the reference angle is obtained according to the changed external parameters of the vision sensor. This embodiment provides a specific way of obtaining the reference angle, which increases the variety of solutions.
  • in another possible implementation, the reference angle is updated to a reference angle determined by both the current installation parameters and the initial installation parameters. For example, the external parameters of the vision sensor in the initial state are obtained; when it is detected that the external parameters of the vision sensor have changed, the deviation between the changed external parameters and the external parameters in the initial state is obtained; if the deviation exceeds a preset threshold, the reference angle is obtained according to a target result, where the target result is obtained by weighting the changed external parameters and the external parameters in the initial state. This embodiment provides a specific way of obtaining the reference angle according to the external parameters, which increases the diversity of the solution.
  • the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
  • the visual sensor is deployed on a target area of the vehicle, and the target area includes at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console.
  • the camera assembly may be a DMS camera or other vehicle-mounted cameras.
  • the vehicle-mounted cameras may be installed in different positions of different vehicle models with a common structure.
  • the reference angle is obtained according to an external parameter of the visual sensor, where the external parameter includes a yaw angle, a pitch angle and a roll angle of the visual sensor, the yaw angle indicating rotation of the visual sensor around the y-axis of the world coordinate system, the pitch angle rotation around the x-axis, and the roll angle rotation around the z-axis (see the description of FIG. 5 below).
  • a specific way of obtaining the reference angle is provided, which increases the diversity of the solution.
  • a configuration file including camera extrinsic parameters can be loaded, so that the DMS system can obtain the extrinsic parameters of the DMS camera and obtain the roll angle according to the extrinsic parameters of the DMS camera.
  • the image to be processed may be an image in any format, for example, the image to be processed may be a RAW image or an RGB image.
  • a RAW image to be processed is acquired.
  • the RAW image to be processed is preprocessed to obtain an image in a preset format, and the preprocessing includes at least one of auto focus, auto exposure, and auto white balance.
  • the image in the preset format is then calibrated according to the installation parameters of the vision sensor.
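  • For illustration only, the preprocess-then-calibrate order might look as follows; the gray-world white balance is a simple stand-in for the ISP's auto white balance (the application does not prescribe a specific algorithm), and calibrate_image_angle is the sketch shown earlier:

```python
import numpy as np

def gray_world_awb(rgb):
    """Toy auto-white-balance under the gray-world assumption: scale each
    channel so that the per-channel means match the global mean."""
    img = rgb.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = img.mean() / (channel_means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Preprocess first, then calibrate the angle of the preset-format image:
# calibrated = calibrate_image_angle(gray_world_awb(image), reference_angle_deg)
```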
  • the image to be processed may be a RAW image to be processed, and the image to be processed may also be an image after preprocessing the RAW image.
  • when the visual sensor is a camera, it may be any type of camera, such as a time-of-flight (TOF) camera, a position-sensor-based camera, and so on.
  • a second aspect of the present application provides an electronic device, the electronic device includes an acquisition module and a calibration module; the acquisition module is used to acquire an image to be processed, where the image to be processed is acquired by a vision sensor; the calibration module is used to calibrate, according to the installation parameters of the vision sensor, the angle of the image to be processed acquired by the acquisition module.
  • the calibration module is specifically configured to: calibrate the angle of the image to be processed according to the installation parameters of the vision sensor, so that the included angle between the target object in the image to be processed and the target coordinate axis is within a first preset range, where the target coordinate axis is a coordinate axis perpendicular to the horizontal plane.
  • the installation parameter includes a reference angle
  • the reference angle is used to indicate the angle at which the installation surface of the vision sensor is deflected in the first direction relative to the horizontal plane.
  • the calibration module is specifically configured to rotate the image to be processed in the second direction by the target angle, where the deviation between the target angle and the reference angle is within a preset range, and the second direction is opposite to the first direction.
  • the reference angle is updated to the reference angle determined by the current installation parameter.
  • the reference angle is updated to the reference angle determined by the current installation parameter and the initial installation parameter.
  • the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
  • the visual sensor is deployed on a target area of the vehicle, and the target area includes at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console.
  • the electronic device is one or more of a processor, a camera module, a vehicle-mounted device, a smart car, a cloud device, or a server.
  • the electronic device is a DMS camera
  • the image to be processed can be obtained through the DMS camera
  • the electronic device can be another vehicle camera, and the image to be processed can also be obtained through other vehicle cameras
  • the electronic device may also be a monitoring device, and the image to be processed may also be acquired through the monitoring device.
  • in some possible implementations, the electronic device is a terminal-side device; in other possible implementations, the electronic device may be a cloud-side device, in which case the image is acquired by a terminal-side device (such as a DMS camera or a monitoring device) and then sent to the cloud-side device.
  • the image to be processed is the image currently being processed by the cloud-side device.
  • a third aspect of the present application provides a smart car, where a camera assembly is deployed on the smart car, and the camera assembly is the electronic device described in the second aspect or any possible implementation manner of the second aspect.
  • a fourth aspect of the present application provides an electronic device, the electronic device includes a processor coupled to a memory, the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • the electronic device is one or more of a processor, a camera module, a vehicle-mounted device, a smart car, a cloud device, or a server.
  • a fifth aspect of the present application provides an electronic device, the electronic device includes an interface circuit coupled to a processor, and the interface circuit is configured to receive program instructions, so that when executing the program instructions received by the interface circuit, the processor implements the method described in the first aspect or any possible implementation manner of the first aspect.
  • the electronic device is one or more of a processor, a camera module, a vehicle-mounted device, a smart car, a cloud device, or a server.
  • a sixth aspect of the present application provides an electronic device comprising a processing circuit and a storage circuit, the processing circuit and the storage circuit being configured to perform the method as described in the first aspect or any possible implementation manner of the first aspect.
  • the electronic device is one or more of a processor, a camera module, a vehicle-mounted device, a smart car, a cloud device, or a server.
  • a seventh aspect of the present application provides a computer-readable storage medium, including a program, which, when executed on a computer, causes the computer to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • An eighth aspect of the present application provides a computer program product, which enables the computer to execute the method described in the first aspect or any possible implementation manner of the first aspect when the computer program product is run on a computer.
  • a ninth aspect of the present application provides a chip, which is coupled to a memory and configured to execute a program stored in the memory, so as to execute the method described in the first aspect or any possible implementation manner of the first aspect.
  • the solutions provided by the embodiments of the present application do not require additional hardware costs, and the camera can be installed in different positions in the vehicle with a common structure, without customizing different camera components for different installation positions, which effectively saves the R&D cost and R&D cycle of the DMS camera. It should be noted that the camera assembly provided by the present application can be deployed not only inside the car but also outside the car, or in other scenarios, so as to reduce the limitation that the installation position places on the camera assembly.
  • Figure 1 shows several in-vehicle cameras installed in different positions in the car
  • Figure 2 is a vehicle-mounted camera installed outside the vehicle
  • Fig. 3 is a schematic diagram of a vertical face and a non-vertical face
  • FIG. 4 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of Euler angles of a camera
  • FIG. 6 is a schematic flowchart of a joint calibration provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the deployment of a DMS camera according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of a vehicle provided by an embodiment of the application.
  • FIG. 13 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Embodiments of the present application provide an image processing method and apparatus.
  • the device using the image processing method provided by the present application can be installed at a wider range of angles, reducing the constraints that the installation space places on the device. For example, the restrictions on the installation position of the in-vehicle camera are reduced, so that different vehicle models can use the same in-vehicle camera.
  • the in-vehicle camera provided by the present application can be installed in different positions of different models with a general structure, without customizing the in-vehicle camera of different shapes for different installation positions, which effectively saves the R&D cost and development cycle of the in-vehicle camera.
  • a typical application scenario of the method provided in this application is a vehicle-mounted camera, including cameras installed inside the vehicle to monitor the in-vehicle environment, such as a DMS camera monitoring the driver's status or a cockpit monitoring system (CMS) camera monitoring the vehicle interior; the vehicle-mounted camera may also include a camera installed outside the vehicle to monitor the environment outside the vehicle, such as a driving recorder.
  • Figures 1 and 2 show several different vehicle-mounted cameras: sub-pictures a to c in Figure 1 show cameras installed at different positions inside the car, and Figure 2 shows a camera installed outside the car.
  • the DMS uses the images obtained by the cameras installed in the car to detect the driver's driving behavior and physiological state through visual tracking, target detection, motion recognition and other technologies. When dangerous situations are detected, such as fatigue or not wearing a seat belt, the system alarms within the set time to avoid accidents.
  • the DMS system can effectively regulate the driver's driving behavior and greatly reduce the probability of traffic accidents.
  • the camera used for acquiring images in the DMS system is referred to as a DMS camera.
  • the model included in the DMS system needs to be trained.
  • a large number of face images are used as training samples, including faces in a fatigued state and faces in a non-fatigued state; the model is iteratively trained on these samples, so that the trained model can determine, based on an input face image, whether the person corresponding to the face is in a fatigued state.
  • the training samples input to the model are usually images with vertical faces (an image with a vertical face means that the plane in which the face lies is vertical with respect to the horizontal plane; refer to sub-figures a and b in Figure 3).
  • sub-figures a to c in FIG. 1 show several DMS cameras installed in different positions in the vehicle. Specifically, when the DMS camera is installed at the first position of the first vehicle at angle A, the DMS camera can obtain a vertical image of the face, but when it is installed at the first position of the second vehicle at angle A, it cannot obtain a vertical image of the face. Therefore, a DMS camera needs to be customized for each different vehicle model.
  • for the first installation position of the first vehicle, the lens assembly, printed circuit board (PCB) and other hardware of the first camera must be rotated by angle A, so that the adjusted first camera (hereinafter referred to as camera A) can obtain a vertical image of the face after being deployed in the first installation position of the first vehicle; for the second installation position of the second vehicle, the lens assembly, PCB and other hardware of the first camera must be rotated by angle B, so that the adjusted first camera (hereinafter referred to as camera B) can obtain a vertical image of the face after being deployed in the second installation position of the second vehicle. Since camera A's hardware has been rotated to a fixed angle, camera A can often only be deployed in the first installation position of the first vehicle and may be completely unusable elsewhere.
  • a stepping motor can be integrated on the vehicle camera body to realize the free rotation of the vehicle camera angle.
  • this method allows the same vehicle camera to be deployed in different installation positions of different vehicles, but installing a stepper motor in the vehicle requires the motor to meet reliability, high-temperature resistance, vibration resistance and other requirements, which further increases the cost of introducing a stepper motor.
  • moreover, when the stepper motor is integrated on the vehicle camera, the size of the camera increases, which makes it harder to find space in the vehicle to deploy the camera, so this method is not ideal.
  • the present application provides an image processing method and a device applying the method.
  • the device may be one or more of a processor, a camera module, a vehicle-mounted device (such as a DMS camera, a CMS camera, and a driving recorder), a smart car, a cloud device, or a server.
  • the device applying the image processing method provided in this application does not add extra hardware cost and can be installed in different positions with a general structure, such as different positions in the car, without customizing different shapes for different installation positions, effectively saving the R&D cost and R&D cycle of the device, such as the vehicle-mounted camera.
  • the following description will be given in conjunction with specific embodiments.
  • FIG. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • an image processing method provided by an embodiment of the present application includes the following steps:
  • Step 401: Acquire an image to be processed, where the image to be processed is acquired by a vision sensor.
  • the image to be processed can be understood as the image currently collected by the vision sensor.
  • the visual sensor involved in this application refers to a device composed of one or more image sensors.
  • the visual sensor involved in this application can also be equipped with a light projector and other auxiliary equipment.
  • the image sensor may be a laser scanner, a camera, etc.; the specific category of the image sensor is not limited in this application.
  • the visual sensor may be a vehicle-mounted camera.
  • the vehicle-mounted camera has been described above, and the description will not be repeated here.
  • when the visual sensor is a camera, it can be any type of camera, for example, a camera using time-of-flight (TOF) technology, a camera using position-sensor technology, and so on.
  • the image to be processed may be an image in any format, for example, the image to be processed may be a RAW image or an RGB image.
  • the image to be processed may be an original image, or an image obtained after preprocessing the original image; for example, it may be an image in RAW format, or a preprocessed RAW image, such as an image obtained after performing autofocus, auto-exposure, or auto-white-balance processing on an image in RAW format.
  • the processor and the vision sensor can be deployed on the same device, for example, both deployed on a vehicle-mounted camera; they can also be deployed on different devices, for example, the vision sensor is deployed on the vehicle-mounted camera while the processor is deployed on a cloud server, or the vision sensor is deployed on the vehicle-mounted camera while the processor is deployed on the in-vehicle head unit or on-board computer.
  • the installation parameters of the vision sensor involved in this application can be used to indicate the offset angle of the vision sensor relative to each coordinate axis of the world coordinate system.
  • the world coordinate system can be implemented by using various different coordinate systems.
  • the world coordinate system can be a space rectangular coordinate system (xyz coordinate system), a geodetic coordinate system, etc., which is not limited in this application.
  • the installation parameters of the vision sensor include yaw angle (yaw), pitch angle (pitch) and roll angle (roll), wherein yaw, pitch and roll may also be collectively referred to as Euler angles.
  • yaw is used to represent the rotation angle of the vision sensor around the y-axis (sub-figure a in Figure 5 represents the initial state, and sub-figure b in Figure 5 shows the vision sensor after rotation around the y-axis); pitch is used to represent the rotation angle of the vision sensor around the x-axis.
  • the sub-figure a in Figure 5 represents the initial state
  • the sub-figure d in Figure 5 is a schematic diagram of the vision sensor after it is rotated around the x-axis.
  • the roll is used to represent the rotation angle of the vision sensor around the z-axis.
  • the sub-picture a in FIG. 5 represents the initial state
  • the sub-picture c in FIG. 5 is a schematic diagram of the vision sensor rotated around the z-axis.
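  • For concreteness, the three rotations can be composed into a single rotation matrix as below (Python/NumPy). The axis assignment follows the description above (yaw about y, pitch about x, roll about z); the composition order Rz·Ry·Rx is our assumption, since conventions differ between calibration toolchains:

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Rotation matrix from Euler angles in radians: yaw about the y-axis,
    pitch about the x-axis and roll about the z-axis."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw: y-axis
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch: x-axis
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])  # roll: z-axis
    return Rz @ Ry @ Rx
```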
  • in addition to the angles above, the installation parameters of the vision sensor can also be used to indicate the installation height of the vision sensor or other parameters that indicate the pose of the vision sensor in space.
  • the solution provided by this application can obtain the installation parameters of the vision sensor in various ways.
  • the following describes how to obtain the installation parameters of the vision sensor with reference to several specific implementations:
  • the installation parameters of the vision sensor may be obtained by obtaining external parameters of the vision sensor.
  • for example, a configuration file including the external parameters of the vehicle-mounted camera can be loaded, so that the system can obtain the camera's external parameters; in this case, the external parameters of the vehicle-mounted camera are the installation parameters of the visual sensor.
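  • A minimal sketch of loading such a configuration file in Python; the file name and JSON keys here are illustrative assumptions, not part of the application:

```python
import json

def load_extrinsics(path="dms_camera_extrinsics.json"):
    """Read yaw/pitch/roll (in degrees) from a hypothetical config file
    shaped like {"yaw": 0.0, "pitch": 0.0, "roll": 15.0}."""
    with open(path) as f:
        params = json.load(f)
    return params["yaw"], params["pitch"], params["roll"]
```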
  • any method that can obtain the external parameters of the visual sensor can be used in the embodiments of the present application; for example, the checkerboard calibration method can be used, or a gravity sensor can be mounted on the visual sensor and the external parameters obtained from the gravity sensor, which is not limited in the embodiments of the present application.
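  • As one concrete (assumed) realization of the checkerboard method, OpenCV can recover the camera pose relative to a board of known geometry, given intrinsics K and distortion coefficients dist calibrated beforehand:

```python
import cv2
import numpy as np

def checkerboard_pose(image, K, dist, board_size=(9, 6), square_mm=25.0):
    """Estimate the camera's rotation and translation relative to a
    checkerboard with board_size inner corners and square_mm squares."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        return None
    # 3D coordinates of the corners on the board plane (z = 0).
    obj = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj *= square_mm
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix, board frame -> camera frame
    return R, tvec
```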
  • considering that the installation parameters of the vision sensor may change, they can be updated by means of online measurement. After the car leaves the factory and has been used for a period of time, external factors such as long-term high-frequency vibration, collision and scratching may shift the relative spatial positions of the sensors, changing the installation parameters of the vision sensor.
  • the installation parameters of the vision sensor may be updated by means of online calibration.
  • any method that can obtain the external parameters of the visual sensor automatically can be used in the embodiments of the present application.
  • for example, online calibration can be performed according to external reference objects such as lane lines, or according to the motion of the camera.
  • an offline and online joint calibration method can also be used.
  • for example, obtain the external parameters (Euler angles: pitch, yaw, roll) of the visual sensor in the initial state; when the external parameters of the vision sensor change, the reference angle is obtained according to the changed external parameters of the vision sensor.
  • the external parameters of the visual sensor obtained by offline calibration are used as the benchmark, and the external parameters of the visual sensor are re-acquired by online calibration at preset time intervals. If the deviation between the external parameters obtained by online calibration and those obtained by offline calibration exceeds a preset threshold, the external parameters of the visual sensor acquired by the offline calibration method are corrected.
  • the external parameters of the visual sensor obtained by the online calibration method can be directly used to update the external parameters of the visual sensor obtained by the offline method, and the updated external parameters are used as the final external parameters of the visual sensor;
  • weighting processing may be performed on the external parameters of the visual sensor obtained by the online calibration method and the external parameters of the visual sensor obtained by the offline method, and the result of the weighting processing may be used as the final external parameters of the visual sensor.
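  • Both correction strategies can be sketched in a few lines; the threshold and weight values below are illustrative assumptions, not values from the application:

```python
def correct_roll(offline_roll, online_roll, threshold_deg=2.0, weight=0.7):
    """Correct the offline-calibrated roll with an online estimate: keep the
    offline baseline while the deviation is tolerable; otherwise replace it
    (strategy 1) or blend the two estimates (strategy 2)."""
    if abs(online_roll - offline_roll) <= threshold_deg:
        return offline_roll
    # Strategy 1: direct replacement with the online value
    # return online_roll
    # Strategy 2: weighted fusion of the online and offline values
    return weight * online_roll + (1 - weight) * offline_roll
```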
  • Step 402: The processor calibrates the angle of the image to be processed according to the installation parameters of the vision sensor. Calibrating the angle means that the processor rotates the image collected by the vision sensor so that the target object included in the image meets a preset calibration standard, where the preset calibration standard can be set according to the actual application scenario.
  • the preset calibration standard can be set so that the angle between the target object in the image to be processed and the target coordinate axis is within a first preset range, and the target coordinate axis is Axis perpendicular to the horizontal plane.
  • the first preset range can be set according to the actual situation. For example, the first preset range can be set sufficiently small, or slightly larger, such as -7° to +7°. It should be noted that when the included angle between the target object and the target coordinate axis is within the first preset range, the target object is upright.
  • for example, when the target object is a human face and the included angle between the target object and the target coordinate axis is within the first preset range, the target object is an upright human face rather than an upside-down human face.
  • for example, the coordinate axis in which the horizontal plane lies is the x-axis, and the target coordinate axis is the part of the y-axis that divides the first quadrant and the second quadrant, not the part that divides the third quadrant and the fourth quadrant; this can be understood with reference to the target coordinate axis and the horizontal plane shown in FIG. 3.
  • the angle between the target object and the target coordinate axis can be obtained in various ways.
  • for example, the connection line between the center point of the target object and the origin of the coordinate system can be obtained, and the included angle between the connection line and the target coordinate axis can be used as the angle between the target object and the target coordinate axis.
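  • A minimal sketch of this measurement, assuming image coordinates with the origin at the bottom-left and the vertical target axis pointing up (the function and its arguments are illustrative):

```python
import math

def angle_to_vertical_axis(cx, cy, ox=0.0, oy=0.0):
    """Angle in degrees between the line from the origin (ox, oy) to the
    target object's center (cx, cy) and the vertical (target) axis."""
    dx, dy = cx - ox, cy - oy
    # atan2(dx, dy) measures the deviation from the +y (vertical) axis.
    return math.degrees(math.atan2(dx, dy))

# A face centered almost straight above the origin deviates only slightly:
# angle_to_vertical_axis(2.0, 100.0) -> about 1.1 degrees
```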
  • the solution provided in this application makes the angle between the target object in the image to be processed and the target coordinate axis within the first preset range.
  • based on the same principle described in this application, the calibration may instead make the angle between the target object in the image to be processed and the horizontal plane fall within a second preset range, or the angle between the target object and another coordinate axis fall within a third preset range, as long as the target object in the calibrated image is upright or nearly upright.
  • sub-image a and sub-image b in FIG. 7 are described together: assuming sub-image a in FIG. 7 is the image to be processed, after the processor performs step 402 on the image, sub-image b in FIG. 7 is obtained.
  • the target object can be any type of object.
  • the images obtained by the DMS camera are used to detect the driving behavior and physiological state of the driver, and the target object can be a person or a human face.
  • the solution provided by the present application can reduce the restriction on the installation position of the vision sensor, as long as the sensing range of the vision sensor can include the target object.
  • the vision sensor can be installed at any angle.
  • the installation parameters include a reference angle
  • the reference angle is used to indicate the angle at which the installation surface of the vision sensor is deflected in the first direction relative to the horizontal plane
  • calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: rotating the image to be processed in the second direction by the target angle, where the deviation between the target angle and the reference angle is within a preset range, and the second direction is opposite to the first direction.
  • the processor rotates the image to be processed by the target angle in the second direction, so that the target object in the image to be processed is vertical with respect to the horizontal plane.
  • since the perception range of the vision sensor only needs to include the target object and the vision sensor can be installed at any angle, the image obtained by the vision sensor may be skewed; that is, the target object in the image obtained by the vision sensor is not vertical relative to the horizontal plane.
  • the installation parameters of the vision sensor are obtained, for example the deflection angle of the installation surface of the vision sensor relative to the horizontal plane; if the installation surface is rotated clockwise by angle A, the image to be processed obtained by the vision sensor is rotated counterclockwise by angle A, so as to correct the image acquired by the vision sensor, that is, to make the objects in the image to be processed vertical with respect to the horizontal plane, which can be understood with reference to sub-figure b in FIG. 7.
  • the rotation direction and the target angle may be stored in the electronic device in advance, or they may be obtained from a cloud-side device, so that the processor retrieves the rotation direction and target angle from the memory of the electronic device and rotates the image to be processed by the target angle in that direction, making the target object in the image vertical with respect to the horizontal plane.
  • for example, the visual sensor is a vehicle-mounted camera and the electronic device is a DMS camera. When yaw, pitch and roll are all 0°, the target object in the image captured by the visual sensor is vertical relative to the ground.
  • the image captured by the vision sensor can be rotated according to the external parameters of the vision sensor, so that the target object in the image captured by the vision sensor is vertical relative to the ground.
  • in a typical installation, the yaw and pitch of the DMS camera are both 0° or close to 0°; then it is only necessary to adjust the image obtained by the DMS camera according to the roll of the DMS camera.
  • for example, the optical center of the DMS camera may be directed toward the driver's face, such as toward the tip of the nose (the position corresponding to the tip of the driver's nose can be estimated based on experience or statistics).
  • Let (x, y) be the coordinates of any point in the image before rotation, (x', y') the coordinates of the same point after rotation, and (x0, y0) the rotation center, which can be understood as the coordinate point corresponding to the image center. If the camera roll is θ, the image needs to be rotated counterclockwise by θ to ensure that the face is vertical, i.e. (the standard rotation about the center):
  x' = (x - x0)·cosθ - (y - y0)·sinθ + x0
  y' = (x - x0)·sinθ + (y - y0)·cosθ + y0
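  • The per-pixel form of this counterclockwise correction is just the formula above restated in code (a sketch under the usual mathematical sign convention; image libraries with a downward y-axis flip the apparent direction):

```python
import math

def rotate_point(x, y, x0, y0, theta):
    """Rotate the point (x, y) counterclockwise by theta (radians, the
    camera roll) about the rotation center (x0, y0)."""
    dx, dy = x - x0, y - y0
    x_new = dx * math.cos(theta) - dy * math.sin(theta) + x0
    y_new = dx * math.sin(theta) + dy * math.cos(theta) + y0
    return x_new, y_new
```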
  • in a possible implementation, the DMS system automatically loads the file including the external parameters of the DMS camera after startup (power-on); the process of solving the external parameters has been introduced above and will not be repeated here.
  • the image collected by the DMS camera can be preprocessed by an image signal processor (ISP).
  • the preprocessing process has also been introduced above, and will not be repeated here.
  • the ISP can also rotate the image so that the face in the image is vertical relative to the horizontal plane. In a possible implementation manner, the ISP rotates the image according to the roll in the external parameters of the DMS camera, that is, rotates the image counterclockwise by the roll angle.
  • in a possible implementation manner, the ISP may be deployed in the DMS camera; in another possible implementation manner, the ISP may be deployed outside the DMS camera, for example, referring to FIG. 10, in a system on chip (SOC).
  • a typical application scenario of the solution provided in this application is that the DMS camera is installed in different positions of different vehicle models with a general structure.
  • the cameras in sub-pictures a to c in Figure 1 can be the same type of camera, rather than the camera in sub-picture a being usable only for the vehicle model and installation position shown in sub-picture a.
  • the solution provided in this application does not rotate the hardware of the camera; instead, it rotates the acquired image by the target angle so that the target object in the image is vertical relative to the horizontal plane, and the specific target angle can be determined based on the roll of the camera.
  • the image processing method provided by the embodiment of the present application has been introduced above.
  • with the method above, different vehicle models can use the same camera without customizing camera components of different shapes for different installation positions, effectively saving the R&D cost and R&D cycle of the camera.
  • the solution provided in this application is mainly described by taking the vehicle-mounted camera as an example, but it is applicable to any kind of camera and can reduce the installation position restrictions on any kind of camera.
  • the device for executing the above-mentioned method includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device includes a processor 1001 , a memory 1002 and a vision sensor 1004 .
  • a communication interface 1003 may also be included (in this application, the communication interface is also sometimes referred to as an interface circuit).
  • the processor 1001 includes but is not limited to one or more of a central processing unit (CPU), a network processor (NP), an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general-purpose array logic (generic array logic, GAL) or any combination thereof.
  • Memory 1002 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory 1002 can be used to store external parameters of the camera.
  • the communication interface 1003 may use any device such as a transceiver for communicating with other devices or a communication network, such as communicating with a cloud device.
  • the communication interface 1003 may use technologies such as Ethernet, radio access network (RAN), and wireless local area networks (WLAN) to communicate with other devices.
  • the above-mentioned processor 1001, memory 1002, communication interface 1003 and visual sensor 1004 jointly implement the method described in the above-mentioned embodiment corresponding to FIG. 4 .
  • the visual sensor 1004 is used to execute the relevant content of step 401 in the embodiment corresponding to FIG. 4
  • the memory 1002 is used to store an instruction, so that when the processor 1001 executes the instruction, step 402 in the embodiment corresponding to FIG. 4 is executed related content.
  • the communication interface 1003 is configured to receive an instruction, so that when the processor 1001 executes the instruction, the relevant content of step 402 in the embodiment corresponding to FIG. 4 is executed.
  • the memory 1002 is used to store the external parameters of the lens assembly, so that after the processor 1001 retrieves the external parameters of the lens assembly from the memory 1002, the related content of step 402 is executed.
  • the electronic device in FIG. 11 may be a smart car.
  • FIG. 12 it is a schematic structural diagram of a smart car provided by the present application.
  • Smart vehicle 100 may include various subsystems, such as travel system 102 , sensor system 104 , one or more peripherals 108 , as well as power supply 110 , computer system 112 , and user interface 116 .
  • the smart vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. Additionally, each of the subsystems and components of the smart car 100 may be wired or wirelessly interconnected.
  • the travel system 102 may include components that provide powered motion for the smart vehicle 100 .
  • travel system 102 may include engine 118 , energy source 119 , transmission 120 , and wheels 121 .
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine.
  • Engine 118 converts energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the smart vehicle 100 .
  • Transmission 120 may transmit mechanical power from engine 118 to wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft. In one embodiment, transmission 120 may also include other devices, such as clutches.
  • the drive shaft may include one or more axles that may be coupled to one or more wheels 121 .
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the smart vehicle 100 .
  • the sensor system 104 may include a global positioning system 122 (the positioning system may be a global positioning GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser ranging instrument 128 and camera 130.
  • the sensor system 104 may also include sensors that monitor the internal systems of the smart vehicle 100 (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensing data from one or more of these sensors can be used to detect people and their corresponding characteristics (position, shape, orientation, speed, etc.).
  • the sensor system further includes a vehicle-mounted camera, such as a DMS camera, for acquiring the environment in the vehicle, such as acquiring the faces of passengers or drivers, and judging the status of the passengers and drivers.
  • the positioning system 122 can be used to estimate the geographic location of the smart car 100 .
  • the IMU 124 is used to sense changes in the position and orientation of the smart car 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 can sense objects in the surrounding environment of the smart car 100 by using radio signals, and can specifically be expressed as a millimeter-wave radar or a lidar. In some embodiments, in addition to sensing objects, radar 126 may be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may utilize laser light to sense objects in the environment in which the smart vehicle 100 is located.
  • laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the smart car 100 .
  • Camera 130 may be a still camera or a video camera.
  • a control system 106 may also be included, and the control system 106 controls the operation of the smart vehicle 100 and its components.
  • Control system 106 may include various components including steering system 132 , throttle 134 , braking unit 136 , computer vision system 140 , line control system 142 , and obstacle avoidance system 144 .
  • the steering system 132 is operable to adjust the heading of the smart car 100 .
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the smart car 100 .
  • the braking unit 136 is used to control the deceleration of the smart car 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow down the wheels 121 to control the speed of the smart car 100 .
  • Computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding smart vehicle 100 .
  • Objects and/or features may include traffic signals, road boundaries, and obstacles.
  • Computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the travel route and travel speed of the smart car 100 .
  • the computer vision system 140 may perform rotation processing on the image captured by the vision sensor according to the external parameters of the vision sensor.
  • the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which combine data from the obstacle avoidance system 144, the GPS 122 and one or more predetermined maps to determine the driving route and driving speed for the smart car 100.
  • Obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise overcome obstacles in the environment of smart car 100 , which may be embodied as actual obstacles and virtual moving bodies that may collide with smart car 100 .
  • the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • Peripherals 108 may include a wireless communication system 146 , an onboard computer 148 , a microphone 150 and/or a speaker 152 .
  • peripheral device 108 provides a means for a user of smart car 100 to interact with user interface 116 .
  • the onboard computer 148 may provide information to the user of the smart car 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripheral devices 108 may provide a means for smart car 100 to communicate with other devices located within the vehicle.
  • Wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 146 may use 3G cellular communications, such as code division multiple access (CDMA), EVDO, global system for mobile communications (GSM) or general packet radio service (GPRS), or 4G cellular communications, such as long term evolution (LTE), or 5G cellular communications.
  • the wireless communication system 146 may communicate using a wireless local area network (WLAN).
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • or other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the smart car 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the smart vehicle 100 .
  • power source 110 and energy source 119 may be implemented together, such as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as memory 114 .
  • Computer system 112 may also be multiple computing devices that control individual components or subsystems of smart vehicle 100 in a distributed fashion.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU).
  • the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • the processor, memory, and other components of the computer system 112 may actually comprise multiple processors or memories that are not housed within the same physical enclosure. For example, memory 114 may be a hard drive or other storage medium located in a housing different from that of computer system 112. Accordingly, references to processor 113 or memory 114 will be understood to include references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering and deceleration components, may each have their own processor that performs only computations related to that component's function.
  • the processor 113 may be located remotely from the smart car 100 and communicate wirelessly with the smart car 100 . In other aspects, some of the processes described herein are performed on the processor 113 disposed within the smart vehicle 100 while others are performed by the remote processor 113, including taking the necessary steps to perform a single maneuver.
  • memory 114 may include instructions 115 (e.g., program logic) executable by processor 113 to perform various functions of smart car 100, including those described above. Memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of travel system 102, sensor system 104, control system 106, and peripherals 108.
  • memory 114 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, among other information. Such information may be used by the smart car 100 and the computer system 112 during operation of the smart car 100 in autonomous, semi-autonomous and/or manual modes.
  • user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , such as wireless communication system 146 , onboard computer 148 , microphone 150 and speaker 152 .
  • Computer system 112 may control the functions of smart vehicle 100 based on input received from various subsystems (eg, travel system 102 , sensor system 104 , and control system 106 ) and from user interface 116 .
  • computer system 112 may utilize input from control system 106 to control steering system 132 to avoid obstacles detected by sensor system 104 and obstacle avoidance system 144 .
  • computer system 112 is operable to provide control of various aspects of smart vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the smart vehicle 100 separately.
  • memory 114 may exist partially or completely separate from smart car 100.
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 12 should not be construed as a limitation on the embodiments of the present application.
  • the electronic device may have more or fewer components than those shown, may combine two or more components, or may be implemented with a different configuration of components.
  • the visual sensor 1004 in FIG. 11 can be regarded as an acquisition module, the processor 1001 as a processing module, the communication interface 1003 as a communication module, and the memory 1002 as a storage module.
  • this application does not limit the specific name of the module, for example, the processing module may also be called a calibration module.
  • the present application further provides an electronic device, referring to FIG. 13, including an acquisition module 1201, a storage module 1202, and a processing module 1203.
  • Embodiments of the present application also provide a chip, where the chip includes: a processing unit, and the processing unit may be, for example, a processor.
  • the processing unit can execute the computer-executable instructions stored in the storage unit, so that the chip executes the method described in FIG. 4 above.
  • the storage unit is a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM), and so on.
  • the aforementioned processing unit or processor may be a central processing unit (CPU), a network processor (neural-network processing unit, NPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like.
  • a general purpose processor may be a microprocessor or it may be any conventional processor or the like.
  • the device embodiments described above are only schematic: the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they can be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • the connection relationship between the modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines.
  • the computer software product is stored in a readable storage medium, such as a floppy disk, USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc, and includes several instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium in which a program for training a model is stored; when it runs on a computer, the program causes the computer to execute the method described in FIG. 3.
  • the embodiments of the present application also provide a digital processing chip.
  • the digital processing chip integrates circuits and one or more interfaces for realizing the above-mentioned processor or the functions of the processor.
  • the digital processing chip can perform the method steps of any one or more of the foregoing embodiments.
  • when the digital processing chip does not integrate a memory, it can be connected to an external memory through a communication interface.
  • the digital processing chip implements the actions performed by the electronic device or the camera in the above embodiment according to the program code stored in the external memory.
  • Embodiments of the present application also provide a computer program product, where the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVD), or semiconductor media (e.g., a Solid State Disk (SSD)), and the like.
  • modules may be combined or integrated into another system, or some features may be ignored.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some ports, and the indirect coupling or communication connection between modules may be electrical or in other similar forms.
  • the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed into multiple circuit modules; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose an image processing method and apparatus. The method includes: acquiring an image to be processed, where the image to be processed is acquired by a vision sensor; and calibrating the angle of the image to be processed according to installation parameters of the vision sensor. An apparatus applying the method provided in the present application can be installed at different positions with a common structure, without customizing apparatuses of different shapes for different installation positions, effectively saving the development cost and development cycle of the apparatus.

Description

Image processing method and apparatus
Technical Field
The present application relates to the technical field of autonomous driving, and in particular, to an image processing method and apparatus.
Background
A driver monitoring system (DMS) can monitor the driver's fatigue state, driving behavior, and the like while the driver is driving. Upon detecting fatigue, yawning, squinting, or other erroneous driving states, the DMS analyzes such behavior in a timely manner and reminds the driver to correct dangerous and erroneous driving behavior, thereby ensuring driving safety.
When the cameras of a DMS are installed on different vehicle models or at different positions of a vehicle, the form of the DMS camera varies considerably, constrained by the structure surrounding the installation position. As a result, the DMS cameras for different vehicle models and different positions all require customized designs, which keeps the development cost of DMS cameras high.
Summary
Embodiments of the present application provide an image processing method and apparatus. An apparatus that adopts the image processing method provided in the present application can be installed at different positions with a common structure, without customizing apparatuses of different structures for different installation positions. For example, a camera assembly that adopts the image processing method provided in the present application can be installed at different positions of different vehicle models with a common structure, which effectively saves the development cost and development cycle of vehicle-mounted cameras.
A first aspect of the present application provides an image processing method, including: acquiring an image to be processed, where the image to be processed is acquired by a vision sensor. The vision sensor in the present application refers to a device composed of one or more image sensors; in addition to the image sensor, the vision sensor may also be equipped with a light projector and other auxiliary equipment (for example, a camera device including a processor). The image sensor may be a laser scanner, a camera, a radar sensor, or the like; the specific type of the image sensor is not limited in the present application. A processor calibrates the angle of the image to be processed according to installation parameters of the vision sensor. The processor and the vision sensor may be deployed on the same device, for example both on a vehicle-mounted camera; they may also be deployed on different devices, for example the vision sensor on a vehicle-mounted camera and the processor on a cloud server, or the vision sensor on a vehicle-mounted camera and the processor on the head unit or on-board computer. The installation parameters of the vision sensor in the present application may be used to indicate the deflection angles of the vision sensor relative to the coordinate axes of a world coordinate system; in other words, the installation parameters reflect the relative spatial position/attitude of the vision sensor. The world coordinate system may be implemented with various coordinate systems, for example a spatial rectangular coordinate system (xyz coordinate system) or a geodetic coordinate system, which is not limited in the present application. Calibrating the angle of the image to be processed means that the processor rotates the image acquired by the vision sensor so that a target object included in the image meets a preset calibration standard, where the preset calibration standard may be set according to the actual application scenario. The target object may be an object of any type; for example, for a DMS camera, whose images are used to detect the driver's driving behavior and physiological state, the target object may be a person or a human face. The solution provided in the present application can reduce the restrictions on the installation position of the vision sensor: it suffices that the sensing range of the vision sensor covers the target object. Provided that the sensing range of the vision sensor covers the target object, the vision sensor can be installed at any angle, in which case the image acquired by the vision sensor may be tilted, that is, the target object in the image is not perpendicular to the horizontal plane. With the solution provided in the present application, the processor calibrates the angle of the image to be processed according to the installation parameters of the vision sensor. For example, the deflection angle of the mounting surface of the vision sensor relative to the horizontal plane is acquired; assuming it has been rotated clockwise by an angle A, the image to be processed acquired by the vision sensor is rotated counterclockwise by the angle A to straighten the image, that is, to make the object in the image perpendicular to the horizontal plane. The solution provided in the present application is applicable to any vision sensor and can reduce the restrictions on the installation position of any vision sensor.
In a possible implementation of the first aspect, calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: calibrating the angle of the image to be processed according to the installation parameters of the vision sensor, so that the included angle between the target object in the image to be processed and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to the horizontal plane. This implementation gives a specific calibration standard: for example, the included angle between the target object and the target coordinate axis may be made zero degrees or close to zero degrees, so that the object in the image is close to perpendicular to the horizontal plane, which helps improve the prediction performance of a model that subsequently uses the image to perform a specified task.
In a possible implementation of the first aspect, the installation parameters include a reference angle, the reference angle being used to indicate the angle by which the mounting surface of the vision sensor is deflected in a first direction relative to the horizontal plane, and calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: rotating the image to be processed in a second direction by a target angle, where the deviation between the target angle and the reference angle is within a preset range and the second direction is opposite to the first direction. This implementation gives a specific calibration manner, which increases the diversity of the solution.
In a possible implementation of the first aspect, when the deviation between the current installation parameters of the vision sensor and the initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters. For example, the extrinsic parameters of the vision sensor in the initial state are acquired; when a change in the extrinsics of the vision sensor is detected, the deviation between the changed extrinsics and the extrinsics in the initial state is acquired; if the deviation exceeds the preset threshold, the reference angle is acquired from the changed extrinsics of the vision sensor. This implementation gives a specific manner of acquiring the reference angle, which increases the diversity of the solution.
In a possible implementation of the first aspect, when the deviation between the current installation parameters of the vision sensor and the initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters and the initial installation parameters. For example, the extrinsics of the vision sensor in the initial state are acquired; when a change in the extrinsics is detected, the deviation between the changed extrinsics and the extrinsics in the initial state is acquired; if the deviation exceeds the preset threshold, the reference angle is acquired from a target result obtained by weighting the changed extrinsics and the extrinsics in the initial state. This implementation gives a specific manner of acquiring the reference angle from the extrinsics, which increases the diversity of the solution.
In a possible implementation of the first aspect, the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
In a possible implementation of the first aspect, the vision sensor is deployed on a target region of a vehicle, the target region including at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console. In this implementation, the camera assembly may be a DMS camera or another vehicle-mounted camera; with the solution provided in the present application, the vehicle-mounted camera can be installed at different positions of different vehicle models with a common structure.
In a possible implementation of the first aspect, the reference angle is acquired from the extrinsic parameters of the vision sensor, the extrinsics including the yaw angle, pitch angle, and roll angle of the vision sensor, where the yaw angle represents the angle of rotation of the vision sensor about the X-axis of the world coordinate system, the pitch angle represents the angle of rotation of the vision sensor about the Y-axis of the world coordinate system, and the roll angle represents the angle of rotation of the vision sensor about the Z-axis of the world coordinate system; the direction of the plane formed by the X-axis and the Y-axis points toward the region to be sensed by the vision sensor, the yaw angle and the pitch angle are both smaller than a first preset threshold, and the roll angle is the reference angle. This implementation gives a specific manner of acquiring the reference angle, which increases the diversity of the solution. Taking a DMS camera as an example, when the DMS system is powered on and runs, a configuration file including the camera extrinsics can be loaded, so that the DMS system obtains the extrinsics of the DMS camera and acquires the roll angle from them.
In a possible implementation of the first aspect, the image to be processed may be an image in any format; for example, the image to be processed may be a RAW image or an RGB image.
In a possible implementation of the first aspect, a RAW image to be processed is acquired; the RAW image to be processed is preprocessed to obtain an image in a preset format, the preprocessing including at least one of auto focus, auto exposure, and auto white balance; and the image in the preset format is calibrated according to the installation parameters. In this implementation, the image to be processed may be the RAW image to be processed, or an image obtained by preprocessing the RAW image.
In a possible implementation of the first aspect, when the vision sensor is a camera, it may be a camera of any type, for example a time-of-flight (TOF) camera, a position-sensor IF camera, and so on.
A second aspect of the present application provides an electronic device including an acquisition module and a calibration module: the acquisition module is configured to acquire an image to be processed, where the image to be processed is acquired by a vision sensor; the calibration module is configured to calibrate the angle of the image to be processed acquired by the acquisition module according to the installation parameters of the vision sensor.
In a possible implementation of the second aspect, the calibration module is specifically configured to: calibrate the angle of the image to be processed according to the installation parameters of the vision sensor, so that the included angle between the target object in the image to be processed and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to the horizontal plane.
In a possible implementation of the second aspect, the installation parameters include a reference angle, the reference angle being used to indicate the angle by which the mounting surface of the vision sensor is deflected in a first direction relative to the horizontal plane, and the calibration module is specifically configured to: rotate the image to be processed in a second direction by a target angle, where the deviation between the target angle and the reference angle is within a preset range and the second direction is opposite to the first direction.
In a possible implementation of the second aspect, when the deviation between the current installation parameters of the vision sensor and the initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters.
In a possible implementation of the second aspect, when the deviation between the current installation parameters of the vision sensor and the initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters and the initial installation parameters.
In a possible implementation of the second aspect, the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
In a possible implementation of the second aspect, the vision sensor is deployed on a target region of a vehicle, the target region including at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console.
In a possible implementation of the second aspect, the electronic device is one or more of a processor, a camera module, a vehicle-mounted apparatus, a smart vehicle, a cloud device, or a server. For example, if the electronic device is a DMS camera, the image to be processed may be acquired by the DMS camera; in some possible implementations, the electronic device may be another vehicle-mounted camera, in which case the image to be processed may be acquired by that camera; in some possible implementations, the electronic device may also be a monitoring device, in which case the image to be processed may be acquired by the monitoring device. In some possible implementations, the electronic device is a terminal-side device; in some possible implementations, the electronic device may also be a cloud-side device: after a terminal-side device (for example a DMS camera or a monitoring device) acquires an image, it sends the image to the cloud-side device; in this case, the image to be processed is the image currently being processed by the cloud-side device.
A third aspect of the present application provides a smart vehicle on which a camera assembly is deployed, the camera assembly being the electronic device described in the second aspect or any possible implementation of the second aspect.
A fourth aspect of the present application provides an electronic device including a processor, the processor being coupled to a memory, the memory storing program instructions which, when executed by the processor, implement the method described in the first aspect or any possible implementation of the first aspect.
In a possible implementation of the fourth aspect, the electronic device is one or more of a processor, a camera module, a vehicle-mounted apparatus, a smart vehicle, a cloud device, or a server.
A fifth aspect of the present application provides an electronic device including an interface circuit, the interface circuit being coupled to a processor and configured to receive program instructions, so that the method described in the first aspect or any possible implementation of the first aspect is implemented when the processor executes the program instructions received by the interface circuit.
In a possible implementation of the fifth aspect, the electronic device is one or more of a processor, a camera module, a vehicle-mounted apparatus, a smart vehicle, a cloud device, or a server.
A sixth aspect of the present application provides an electronic device including a processing circuit and a storage circuit, the processing circuit and the storage circuit being configured to execute the method described in the first aspect or any possible implementation of the first aspect.
In a possible implementation of the sixth aspect, the electronic device is one or more of a processor, a camera module, a vehicle-mounted apparatus, a smart vehicle, a cloud device, or a server.
A seventh aspect of the present application provides a computer-readable storage medium including a program which, when run on a computer, causes the computer to execute the method described in the first aspect or any possible implementation of the first aspect.
An eighth aspect of the present application provides a computer program product which, when run on a computer, enables the computer to execute the method described in the first aspect or any possible implementation of the first aspect.
A ninth aspect of the present application provides a chip, the chip being coupled to a memory and configured to execute a program stored in the memory, so as to execute the method described in the first aspect or any possible implementation of the first aspect.
For the specific implementation of the steps in the various possible implementations of the second to ninth aspects of the embodiments of the present application, the specific meanings of the terms in each possible implementation, and the beneficial effects brought by each possible implementation, reference may be made to the descriptions of the various possible implementations of the first aspect; details are not repeated here.
The solution provided in the embodiments of the present application requires no additional hardware cost and can be installed at different positions inside a vehicle with a common structure, without customizing different camera assemblies for different installation positions, effectively saving the development cost and development cycle of DMS cameras. It should be noted that the camera assembly provided in the present application can be deployed not only inside a vehicle but also outside a vehicle, or in other scenarios, to reduce the restrictions that the installation position places on installing the camera assembly.
Brief Description of the Drawings
FIG. 1 shows several vehicle-mounted cameras installed at different positions inside a vehicle;
FIG. 2 shows a vehicle-mounted camera installed outside a vehicle;
FIG. 3 is a schematic diagram of a vertical face and a non-vertical face;
FIG. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the Euler angles of a camera;
FIG. 6 is a schematic flowchart of joint calibration according to an embodiment of the present application;
FIG. 7 is a diagram of the effect of an image processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the deployment of a DMS camera according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a vehicle according to an embodiment of the present application;
FIG. 13 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application provide an image processing method and apparatus. An apparatus that adopts the image processing method provided in the present application can be installed at more installation angles, reducing the restrictions that installation space places on installing the apparatus. For example, the restrictions on the installation position of a vehicle-mounted camera are reduced, so that different vehicle models can use the same vehicle-mounted camera. The vehicle-mounted camera provided in the present application can be installed at different positions of different vehicle models with a common structure, without customizing vehicle-mounted cameras of different shapes for different installation positions, effectively saving the development cost and development cycle of vehicle-mounted cameras.
The embodiments of the present application are described below with reference to the accompanying drawings. A person of ordinary skill in the art will appreciate that, as technology develops and new scenarios emerge, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
A typical application scenario of the method provided in the present application is its application to vehicle-mounted cameras. Vehicle-mounted cameras include cameras installed inside a vehicle for monitoring the in-vehicle environment, for example a DMS camera for monitoring the driver's state, or a cockpit monitoring system (CMS) camera for monitoring the interior of the vehicle; vehicle-mounted cameras may also include cameras installed outside the vehicle for monitoring the environment outside the vehicle, for example a dashcam. FIG. 1 and FIG. 2 show several different vehicle-mounted cameras: sub-figures a to c of FIG. 1 show in-vehicle cameras installed at different positions inside a vehicle, and FIG. 2 shows a camera installed outside a vehicle.
Taking a DMS camera as an example, the following describes how the installation position of a vehicle-mounted camera is constrained by the installation space. Using images acquired by a camera installed in the vehicle, the DMS detects the driver's driving behavior and physiological state through techniques such as visual tracking, target detection, and action recognition; when the driver is fatigued, distracted, making a phone call, smoking, not wearing a seat belt, or in other dangerous situations, the system raises an alarm within a set time to avoid accidents. The DMS can effectively regulate the driver's driving behavior and greatly reduce the probability of traffic accidents. In the present application, the camera used by the DMS to acquire images is called the DMS camera. To enable the DMS to detect the driver's driving behavior and physiological state, the models included in the DMS need to be trained. For example, a large number of face images are used as training samples, including faces in a fatigued state and faces not in a fatigued state; the model is iteratively trained with these samples so that the trained model can determine, from an input face image, whether the corresponding person is in a fatigued state. Since the training samples input to the model during the training phase are usually images with vertical faces (an image with a vertical face means the plane of the face is perpendicular to the horizontal plane; see sub-figures a and c of FIG. 3, where sub-figure a shows a vertical face and sub-figure c shows a non-vertical face), the images input to the trained model when it performs inference tasks should also contain vertical faces, which helps improve the prediction accuracy of the model. Therefore, when a DMS camera is arranged in the vehicle, it should be able to capture images with vertical faces, which places great restrictions on its installation position. Moreover, because the internal structure, wiring harnesses, trim, and so on of different vehicle models often differ, a DMS camera can be installed at angle A at a first position of a first vehicle but cannot be installed at exactly angle A at the first position of a second vehicle. Referring again to FIG. 1, sub-figures a to c show several DMS cameras installed at different positions in the vehicle. Specifically, when the DMS camera is installed at angle A at the first position of the first vehicle, it can acquire images with vertical faces; but when it is installed at angle A at the first position of the second vehicle, it cannot acquire images with vertical faces. Therefore, a DMS camera has to be customized for each different vehicle model. For example, taking a first camera as the original camera: for the first installation position of the first vehicle, the lens assembly, printed circuit board (PCB), and other hardware of the first camera must be rotated by angle A so that the adjusted first camera (hereinafter camera A), once deployed at the first installation position of the first vehicle, can acquire images with vertical faces; for the second installation position of the second vehicle, the lens assembly, PCB, and other hardware must be rotated by angle B so that the adjusted first camera (hereinafter camera B), once deployed at the second installation position of the second vehicle, can acquire images with vertical faces. Because camera A's hardware angle has been adjusted, camera A can usually only be deployed at the first installation position of the first vehicle and may be entirely unusable elsewhere.
To solve the above problem, a stepper motor could be integrated into the body of the vehicle-mounted camera to allow free rotation of the camera angle. Although this approach allows the same vehicle-mounted camera to be deployed at different installation positions of different vehicles, installing a stepper motor in a vehicle requires the motor to meet reliability, high-temperature resistance, and vibration resistance requirements, which further increases the cost of introducing the stepper motor. Moreover, integrating a stepper motor increases the size of the vehicle-mounted camera, making it harder to find space in the vehicle to deploy such a camera, so this approach is also not ideal.
In view of the above problems, the present application provides an image processing method and an apparatus applying the method. The apparatus may be one or more of a processor, a camera module, a vehicle-mounted apparatus (for example, a DMS camera, a CMS camera, or a dashcam), a smart vehicle, a cloud device, or a server. An apparatus applying the image processing method provided in the present application requires no additional hardware cost and can be installed at different positions with a common structure, for example at different positions inside a vehicle, without customizing apparatuses of different shapes for different installation positions, effectively saving the development cost and development cycle of the apparatus, for example the development cost and development cycle of vehicle-mounted cameras. This is described below with reference to specific embodiments.
Refer to FIG. 4, which is a schematic flowchart of an image processing method according to an embodiment of the present application.
As shown in FIG. 4, the image processing method provided in this embodiment of the present application includes the following steps:
401. Acquire an image to be processed.
The image to be processed is acquired by a vision sensor and can be understood as the image currently captured by the vision sensor.
The vision sensor in the present application refers to a device composed of one or more image sensors. In addition to the image sensor, the vision sensor may also be equipped with a light projector and other auxiliary equipment. The image sensor may be a laser scanner, a camera, or the like; the specific type of the image sensor is not limited in the present application.
In a preferred implementation, the vision sensor may be a vehicle-mounted camera; vehicle-mounted cameras have been introduced above and are not described again here. When the vision sensor is a camera, it may be a camera of any type, for example a camera using time-of-flight (TOF) technology, a camera using position-sensor IF technology, and so on.
In a possible implementation, the image to be processed may be an image in any format; for example, it may be a RAW image or an RGB image. In other words, the image to be processed may be an original image or an image obtained by preprocessing the original image; for example, it may be an image in RAW format, or an image obtained by preprocessing a RAW-format image, such as an image obtained by applying auto focus, auto exposure, or auto white balance to the RAW-format image.
402. Calibrate the angle of the image to be processed according to the installation parameters of the vision sensor.
The processor and the vision sensor may be deployed on the same device, for example both on a vehicle-mounted camera; they may also be deployed on different devices, for example the vision sensor on a vehicle-mounted camera and the processor on a cloud server, or the vision sensor on a vehicle-mounted camera and the processor on the head unit or on-board computer. The installation parameters of the vision sensor in the present application may be used to indicate the deflection angles of the vision sensor relative to the coordinate axes of a world coordinate system. The world coordinate system may be implemented with various coordinate systems, for example a spatial rectangular coordinate system (xyz coordinate system) or a geodetic coordinate system, which is not limited in the present application. For ease of description, the installation parameters of the vision sensor are introduced below using the spatial coordinate system (xyz coordinate system) as an example. The installation parameters of the vision sensor include the yaw angle (yaw), the pitch angle (pitch), and the roll angle (roll); yaw, pitch, and roll may collectively be called Euler angles. Yaw represents the angle of rotation of the vision sensor about the y-axis; see FIG. 5, where sub-figure a shows the initial state and sub-figure b shows the vision sensor after rotation about the y-axis. Pitch represents the angle of rotation of the vision sensor about the x-axis; see FIG. 5, where sub-figure a shows the initial state and sub-figure d shows the sensor after rotation about the x-axis. Roll represents the angle of rotation of the vision sensor about the z-axis; see FIG. 5, where sub-figure a shows the initial state and sub-figure c shows the sensor after rotation about the z-axis.
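For illustration only, the Euler-angle convention just described can be made concrete with a short Python sketch (an editorial addition, not part of the claimed method); the composition order of the three rotations and the use of radians are assumptions:

    import numpy as np

    def euler_to_rotation(yaw, pitch, roll):
        # Yaw about the y-axis, pitch about the x-axis, roll about the
        # z-axis, matching the FIG. 5 convention; angles in radians.
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        r_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
        r_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
        r_roll = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
        # Assumed order: pitch applied first, then yaw, then roll.
        return r_roll @ r_yaw @ r_pitch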
It should be noted that, in addition to indicating the deflection angles of the vision sensor relative to the coordinate axes of the world coordinate system, the installation parameters may also indicate the installation height of the vision sensor or other parameters indicating the attitude of the vision sensor in space.
The solution provided in the present application can acquire the installation parameters of the vision sensor in various ways; how to acquire them is described below with several specific implementations:
In a possible implementation, the installation parameters of the vision sensor may be obtained by acquiring the extrinsic parameters of the vision sensor. Taking a vehicle-mounted camera as an example, when the vehicle camera system is powered on and runs, a configuration file including the camera extrinsics can be loaded, so that the system obtains the extrinsics of the vehicle-mounted camera; these extrinsics are the installation parameters of the vision sensor. It should be noted that, in this implementation, any method of obtaining the extrinsics of a vision sensor can be adopted in the embodiments of the present application, for example a checkerboard calibration method, or fitting the vision sensor with a gravity sensor and obtaining the extrinsics from the gravity sensor, which is not limited in the embodiments of the present application.
In another possible implementation, the installation parameters of the vision sensor can be adjusted, and they may be updated by online measurement. After a vehicle has left the factory and been in use for some time, external factors such as long-term high-frequency vibration and collisions or scrapes cause the relative spatial positions of the sensors to shift. In other words, as driving time accumulates, road conditions, vehicle vibration, and the like change the position of the vision sensor, after which the initial installation parameters can no longer accurately indicate the sensor's mounting attitude in space. To solve this problem, in a possible implementation, online calibration may be used to update the installation parameters of the vision sensor. In this implementation, any method of automatically obtaining the extrinsics of a vision sensor can be adopted, for example online calibration against external references such as lane lines, or automatically obtaining the extrinsics using constraints on camera motion.
In a preferred implementation, as shown in FIG. 6, a joint offline-online calibration approach may also be used. First, the extrinsics of the vision sensor in the initial state are acquired (Euler angles: pitch, yaw, roll); then, when a change in the extrinsics of the vision sensor is detected, the deviation between the changed extrinsics and the extrinsics in the initial state is acquired; finally, if the deviation exceeds a preset threshold, the reference angle is acquired from the changed extrinsics of the vision sensor. In other words, the extrinsics obtained by offline calibration serve as the baseline, and the extrinsics are obtained by online calibration at preset intervals; if the deviation between the extrinsics obtained online and those obtained offline exceeds the preset threshold, the extrinsics obtained by offline calibration are corrected. In a possible implementation, the extrinsics obtained by online calibration may directly replace those obtained offline, the updated extrinsics serving as the final extrinsics of the vision sensor; in another possible implementation, the extrinsics obtained by online calibration and those obtained offline may be weighted, the weighted result serving as the final extrinsics of the vision sensor.
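The threshold-and-update logic of this joint offline-online calibration can be sketched as follows (an editorial illustration; the threshold value and the weighting factor are assumptions, not values given in the text):

    def fuse_extrinsics(offline, online, threshold_deg=2.0, w_online=0.7):
        # offline/online are (pitch, yaw, roll) tuples in degrees; the
        # offline calibration is the baseline. When an online estimate
        # deviates from it by more than threshold_deg, blend the two
        # (the text also allows adopting the online value outright).
        fused = []
        for base, new in zip(offline, online):
            if abs(new - base) > threshold_deg:
                fused.append(w_online * new + (1.0 - w_online) * base)
            else:
                fused.append(base)
        return tuple(fused)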
Calibrating the angle of the image to be processed means that the processor rotates the image acquired by the vision sensor so that the target object included in the image meets a preset calibration standard, where the preset calibration standard may be set according to the actual application scenario. For example, in a preferred implementation, the preset calibration standard may be set such that the included angle between the target object in the image and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to the horizontal plane. The first preset range may be set according to the actual situation; for example, it may be set sufficiently small (see sub-figure a of FIG. 3), making the included angle between the target object and the target coordinate axis zero degrees or close to zero degrees, so that the object in the image is close to perpendicular to the horizontal plane, which helps improve the prediction performance of a model that subsequently uses the image to perform a specified task. As another example (see sub-figure b of FIG. 3), to reduce the difficulty of implementation, the first preset range may be set somewhat larger, for example -7 to +7 degrees. It should be noted that, besides the included angle between the target object and the target coordinate axis being within the first preset range, the target object must also be upright. For example, when the target object is a human face and the included angle is within the first preset range, the face is upright rather than upside down. In other words, taking the xyz coordinate system as an example, the axis of the horizontal plane is the x-axis, and the target coordinate axis is part of the y-axis: the target coordinate axis only separates the first and second quadrants, not the third and fourth quadrants, which can be understood with reference to the target coordinate axis and horizontal plane shown in FIG. 3. The included angle between the target object and the target coordinate axis can be obtained in various ways, for example by taking the line connecting the center point of the target object and the origin of the coordinate system and measuring the angle between this line and the target coordinate axis. In addition, it should be noted that, for ease of description, the solution provided in the present application makes the included angle between the target object in the image and the target coordinate axis fall within the first preset range; in some possible implementations, following the same principle described in the present application, the included angle between the target object and the horizontal plane may be made to fall within a second preset range, or the included angle between the target object and another coordinate axis within a third preset range, as long as the target object in the calibrated image is upright or close to upright. This is illustrated with sub-figures a and b of FIG. 7: assuming sub-figure a of FIG. 7 is the image to be processed, after the processor performs step 402 on it, sub-figure b of FIG. 7 can be obtained. The target object may be any type of object; for example, for a DMS camera, whose images are used to detect the driver's driving behavior and physiological state, the target object may be a person or a human face.
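As an illustration of the calibration standard described above (an editorial sketch; how the object's center is obtained is an assumption), the following check measures the angle between the line from the coordinate origin to the target object's center and the vertical target coordinate axis, and tests it against the example range of -7 to +7 degrees:

    import math

    def within_first_preset_range(center_x, center_y, limit_deg=7.0):
        # The object must lie above the horizontal axis to count as
        # upright; 0 degrees means perfectly aligned with the vertical
        # target coordinate axis.
        if center_y <= 0:
            return False
        angle = math.degrees(math.atan2(center_x, center_y))
        return abs(angle) <= limit_deg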
The solution provided in the present application can reduce the restrictions on the installation position of the vision sensor; it suffices that the sensing range of the vision sensor covers the target object. Provided that the sensing range of the vision sensor covers the target object, the vision sensor can be installed at any angle.
In a preferred implementation, the installation parameters include a reference angle, the reference angle being used to indicate the angle by which the mounting surface of the vision sensor is deflected in a first direction relative to the horizontal plane. Calibrating the angle of the image to be processed according to the installation parameters of the vision sensor includes: rotating the image to be processed in a second direction by a target angle, where the deviation between the target angle and the reference angle is within a preset range and the second direction is opposite to the first direction. The processor rotates the image to be processed in the second direction by the target angle so that the target object in the image is perpendicular to the horizontal plane. Provided that the sensing range of the vision sensor covers the target object, the sensor can be installed at any angle, in which case the acquired image may be tilted, that is, the target object in the image is not perpendicular to the horizontal plane, as shown in sub-figure a of FIG. 7. With the solution provided in the present application, the installation parameters of the vision sensor are acquired; for example, the deflection angle of the mounting surface relative to the horizontal plane is acquired, and assuming it has been rotated clockwise by an angle A, the image acquired by the vision sensor is rotated counterclockwise by the angle A to straighten the image, that is, to make the object in the image perpendicular to the horizontal plane, as shown in sub-figure b of FIG. 7.
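A minimal sketch of this straightening step is given below using OpenCV (the library choice and the sign convention of the roll angle are assumptions; the patent does not prescribe an implementation). Positive angles in cv2.getRotationMatrix2D mean counterclockwise rotation, so passing the clockwise mounting deflection in degrees undoes it:

    import cv2

    def straighten(image, roll_deg):
        # Rotate the frame counterclockwise by roll_deg about its
        # center, undoing a clockwise mounting deflection of the same
        # magnitude so that the target object becomes vertical.
        h, w = image.shape[:2]
        center = (w / 2.0, h / 2.0)
        matrix = cv2.getRotationMatrix2D(center, roll_deg, 1.0)
        return cv2.warpAffine(image, matrix, (w, h))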
In a possible implementation, the first direction and the target angle may be stored in the electronic device in advance, or may be obtained from a cloud-side device, so that the processor fetches the first direction and the target angle from the memory of the electronic device and rotates the image to be processed in the corresponding direction by the target angle, making the target object in the image perpendicular to the horizontal plane.
In a possible implementation, if the vision sensor is a vehicle-mounted camera, for example the electronic device is a DMS camera: when yaw, pitch, and roll are all 0 degrees, the target object in the image captured by the vision sensor is perpendicular to the ground; therefore, once the extrinsics of the vision sensor are obtained, the captured image can be rotated according to the extrinsics so that the target object in the image is perpendicular to the ground. At the deployment stage of the DMS camera, the yaw and pitch of the camera can then be made 0 degrees or close to 0 degrees, so that the image acquired by the DMS camera only needs to be adjusted according to the camera's roll.
In a possible implementation, it must also be ensured that the sensing range of the DMS camera includes the driver's face region. For example, referring to FIG. 8, at the deployment stage of the DMS camera, the optical center of the DMS camera can be pointed at the tip of the driver's nose (the position of the driver's nose tip can be estimated from experience or statistics). In a possible implementation, the adjustment of the image acquired by the DMS camera according to roll can be understood with reference to Equation 1-1.
x' = (x - x0)·cosθ - (y - y0)·sinθ + x0,  y' = (x - x0)·sinθ + (y - y0)·cosθ + y0    (1-1)
where (x, y) are the coordinates of any point in the image before rotation, (x', y') are the coordinates of that point after rotation, and (x0, y0) is the center of rotation, which can be understood as the coordinate point at the exact center of the image. With the camera's roll = θ, the image then needs to be rotated counterclockwise by roll to keep the face vertical.
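Equation 1-1 can be transcribed directly for a single pixel coordinate as below (an editorial sketch; resampling the whole image from the mapped coordinates is omitted):

    import math

    def rotate_point(x, y, x0, y0, theta):
        # Rotate (x, y) counterclockwise by theta (radians) about the
        # rotation center (x0, y0), exactly as in Equation 1-1.
        x_new = (x - x0) * math.cos(theta) - (y - y0) * math.sin(theta) + x0
        y_new = (x - x0) * math.sin(theta) + (y - y0) * math.cos(theta) + y0
        return x_new, y_new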
Referring to the flowchart shown in FIG. 9: for a DMS camera, after startup (power-on), the DMS system can automatically load a file including the extrinsics of the DMS camera, where the process of solving the extrinsics has been introduced above and is not repeated here. After the DMS camera captures an image, an image signal processor (ISP) can preprocess the captured image; the preprocessing has also been introduced above and is not repeated here. The ISP can also rotate the image so that the face in the image is perpendicular to the horizontal plane. In a possible implementation, the ISP rotates the image according to the roll in the DMS camera extrinsics, that is, rotates it counterclockwise by roll. It should be noted that, in one possible implementation, the ISP may be deployed inside the DMS camera; in another possible implementation, the ISP may be deployed outside the DMS camera, for example, referring to FIG. 10, on a system on chip (SOC).
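Putting the FIG. 9 flow together, an end-to-end sketch might look as follows (an editorial illustration: the configuration layout and the isp object's method name are assumptions, and straighten() is the sketch given earlier):

    def dms_frame_pipeline(raw_frame, extrinsics, isp):
        # Preprocess the RAW frame (e.g., AF/AE/AWB), then rotate it
        # counterclockwise by the roll loaded at power-on so that the
        # face in the frame is vertical.
        frame = isp.preprocess(raw_frame)
        return straighten(frame, extrinsics["roll"])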
A typical application scenario of the solution provided in the present application is that DMS cameras are installed at different positions of different vehicle models with a common structure. For example, the cameras in sub-figures a to c of FIG. 1 may be the same camera model; it is not the case that the camera in sub-figure a can only fit the vehicle model and installation position shown in sub-figure a. This is because the solution provided in the present application does not rotate the camera hardware, but rotates the acquired image by a target angle so that the target object in the image is perpendicular to the horizontal plane, where the specific target angle may be determined from the camera's roll.
The above has introduced an image processing method provided in the embodiments of the present application. With this method, different vehicle models can use the same camera model, without customizing camera assemblies of different shapes for different installation positions, effectively saving the development cost and development cycle of cameras. In addition, it should be noted that the solution provided in the present application is described mainly using vehicle-mounted cameras as an example, but it is applicable to any camera and can reduce the restrictions on the installation position of any camera.
It can be understood that, to realize the above functions, the device executing the above method includes corresponding hardware structures and/or software modules for executing the respective functions. A person skilled in the art should readily appreciate that, in combination with the modules and algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
Refer to FIG. 11, a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor 1001, a memory 1002, and a vision sensor 1004. Optionally, it may also include a communication interface 1003 (sometimes also called an interface circuit in the present application). The processor 1001 includes but is not limited to one or more of a central processing unit (CPU), a network processor (NP), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD). The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The memory 1002 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory 1002 may be used to store the extrinsics of the camera. The communication interface 1003 may use any transceiver-like apparatus for communicating with other devices or communication networks, for example with a cloud device. The communication interface 1003 may use technologies such as Ethernet, radio access network (RAN), or wireless local area networks (WLAN) to communicate with other devices.
It should be noted that the processor 1001, the memory 1002, the communication interface 1003, and the vision sensor 1004 together implement the method described in the embodiment corresponding to FIG. 4. For example, the vision sensor 1004 is configured to execute the content related to step 401 in the embodiment corresponding to FIG. 4, and the memory 1002 is configured to store instructions so that, when executing them, the processor 1001 executes the content related to step 402 in the embodiment corresponding to FIG. 4. Alternatively, the communication interface 1003 is configured to receive instructions so that, when executing them, the processor 1001 executes the content related to step 402. Alternatively, the memory 1002 is configured to store the extrinsics of the lens assembly so that, after fetching them from the memory 1002, the processor 1001 executes the content related to step 402.
In a possible implementation, the electronic device in FIG. 11 may be a smart vehicle. FIG. 12 is a schematic architecture diagram of a smart vehicle provided in the present application.
The smart vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, one or more peripheral devices 108, a power source 110, a computer system 112, and a user interface 116. Optionally, the smart vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each subsystem and component of the smart vehicle 100 may be interconnected by wire or wirelessly.
The travel system 102 may include components that provide powered motion for the smart vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other engine types, for example a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 119 may also provide energy for other systems of the smart vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other components, such as a clutch. The drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
The sensor system 104 may include several sensors that sense information about the environment around the smart vehicle 100. For example, the sensor system 104 may include a global positioning system 122 (the positioning system may be a GPS system, the BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the smart vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensing data from one or more of these sensors can be used to detect persons and their corresponding characteristics (position, shape, orientation, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous smart vehicle 100. In the solution provided in the present application, the sensor system also includes a vehicle-mounted camera, for example a DMS camera, for capturing the in-vehicle environment, for example capturing the faces of passengers or the driver to determine the states of the passengers and the driver.
The positioning system 122 may be used to estimate the geographic position of the smart vehicle 100. The IMU 124 is used to sense changes in the position and orientation of the smart vehicle 100 based on inertial acceleration. In one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may use radio signals to sense objects in the surrounding environment of the smart vehicle 100, and may specifically take the form of a millimeter-wave radar or a lidar. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of objects. The laser rangefinder 128 may use laser light to sense objects in the environment in which the smart vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, as well as other system components. The camera 130 may be used to capture multiple images of the surrounding environment of the smart vehicle 100. The camera 130 may be a still camera or a video camera.
In a possible implementation, a control system 106 may also be included; the control system 106 controls the operation of the smart vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the smart vehicle 100; for example, in one embodiment it may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the smart vehicle 100. The braking unit 136 is used to control the deceleration of the smart vehicle 100; the braking unit 136 may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation of the wheels 121 and thereby control the speed of the smart vehicle 100. The computer vision system 140 is operable to process and analyze images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the smart vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map the environment, track objects, estimate object speeds, and so on. The route control system 142 is used to determine the driving route and driving speed of the smart vehicle 100. In the solution provided in the present application, the computer vision system 140 may rotate the images captured by the vision sensor according to the extrinsics of the vision sensor.
In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are respectively used to determine the driving route and driving speed for the smart vehicle 100 in combination with data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps. The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the smart vehicle 100; such obstacles may specifically be actual obstacles and virtual moving bodies that may collide with the smart vehicle 100. In one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be removed.
The smart vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150, and/or a speaker 152. In some embodiments, the peripheral devices 108 provide a means for the user of the smart vehicle 100 to interact with the user interface 116. For example, the on-board computer 148 may provide information to the user of the smart vehicle 100. The user interface 116 may also operate the on-board computer 148 to receive user input; the on-board computer 148 may be operated through a touchscreen. In other cases, the peripheral devices 108 may provide a means for the smart vehicle 100 to communicate with other devices located in the vehicle. For example, the microphone 150 may receive audio (for example, voice commands or other audio input) from the user of the smart vehicle 100. Similarly, the speaker 152 may output audio to the user of the smart vehicle 100 (for example, prompting a user outside the vehicle that the vehicle is about to enter an execution state). The wireless communication system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as code division multiple access (CDMA), EVD0, or the global system for mobile communications (GSM), general packet radio service (GPRS), or 4G cellular communication such as long term evolution (LTE), or 5G cellular communication. The wireless communication system 146 may communicate using a wireless local area network (WLAN). In some embodiments, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The power source 110 may provide power to various components of the smart vehicle 100. In one embodiment, the power source 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the smart vehicle 100. In some embodiments, the power source 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the smart vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as the memory 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the smart vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Optionally, the processor 113 may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although FIG. 12 functionally illustrates the processor, the memory, and other components of the computer system 112 in the same block, a person of ordinary skill in the art should understand that the processor or memory may in fact comprise multiple processors or memories that are not stored within the same physical enclosure. For example, the memory 114 may be a hard disk drive or another storage medium located in an enclosure different from that of the computer system 112. Therefore, references to the processor 113 or the memory 114 will be understood to include references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as steering components and deceleration components, may each have their own processor that performs only computations related to the component-specific functions. In the various aspects described here, the processor 113 may be located remotely from the smart vehicle 100 and communicate wirelessly with the smart vehicle 100. In other aspects, some of the processes described here are executed on a processor 113 arranged within the smart vehicle 100 while others are executed by a remote processor 113, including taking the steps necessary to perform a single maneuver. In some embodiments, the memory 114 may contain instructions 115 (for example, program logic) executable by the processor 113 to perform various functions of the smart vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the memory 114 may also store data such as road maps, route information, the vehicle's position, direction, and speed, and other such vehicle data, as well as other information. Such information may be used by the smart vehicle 100 and the computer system 112 during operation of the smart vehicle 100 in autonomous, semi-autonomous, and/or manual modes. The user interface 116 is used to provide information to or receive information from the user of the smart vehicle 100. Optionally, the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as the wireless communication system 146, the on-board computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the smart vehicle 100 based on input received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the smart vehicle 100 and its subsystems.
Optionally, one or more of these components may be installed separately from or associated with the smart vehicle 100. For example, the memory 114 may exist partially or completely separately from the smart vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are merely an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 12 should not be construed as a limitation on the embodiments of the present application.
It should be understood that the above is merely an example provided in the embodiments of the present application, and the electronic device may have more or fewer components than shown, may combine two or more components, or may be implemented with a different configuration of components.
The vision sensor 1004 in FIG. 11 can be regarded as an acquisition module, the processor 1001 as a processing module, the communication interface 1003 as a communication module, and the memory 1002 as a storage module. It should be noted that the present application does not limit the specific names of the modules; for example, the processing module may also be called a calibration module. The present application further provides an electronic device; referring to FIG. 13, it includes an acquisition module 1201, a storage module 1202, and a processing module 1203.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, it may be implemented in whole or in part in the form of a computer program product.
An embodiment of the present application also provides a chip, the chip including a processing unit, which may be, for example, a processor. The processing unit can execute computer-executable instructions stored in a storage unit, so that the chip executes the method described in FIG. 4 above. Optionally, the storage unit is a storage unit within the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip within the wireless access device, such as a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM), and so on. Specifically, the aforementioned processing unit or processor may be a central processing unit (CPU), a network processor (neural-network processing unit, NPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or it may be any conventional processor or the like.
It should also be noted that the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided in the present application, the connection relationships between the modules indicate that there are communication connections between them, which may specifically be implemented as one or more communication buses or signal lines.
From the description of the above implementations, a person skilled in the art can clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and of course also by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can easily be implemented with corresponding hardware, and the specific hardware structures used to implement the same function can also be varied, for example analog circuits, digital circuits, or dedicated circuits. However, for the present application, a software program implementation is in most cases the better implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present application.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, it may be implemented in whole or in part in the form of a computer program product.
An embodiment of the present application also provides a computer-readable storage medium storing a program for training a model which, when run on a computer, causes the computer to execute the method described in FIG. 3 above.
An embodiment of the present application also provides a digital processing chip. The digital processing chip integrates circuits and one or more interfaces for implementing the above processor or the functions of the processor. When a memory is integrated in the digital processing chip, the digital processing chip can complete the method steps of any one or more of the foregoing embodiments. When no memory is integrated in the digital processing chip, it can be connected to an external memory through a communication interface. The digital processing chip implements, according to program code stored in the external memory, the actions performed by the electronic device or the camera in the above embodiments.
An embodiment of the present application also provides a computer program product, the computer program product including one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (for example infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)), and the like.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The terms "first", "second", and so on in the specification, claims, and above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here. The term "and/or" in the present application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Furthermore, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or modules is not necessarily limited to those steps or modules clearly listed, but may include other steps or modules not clearly listed or inherent to such processes, methods, products, or devices. The naming or numbering of steps appearing in the present application does not mean that the steps in the method flow must be executed in the temporal/logical order indicated by the naming or numbering; the named or numbered process steps can be executed in a changed order according to the technical purpose to be achieved, as long as the same or a similar technical effect can be achieved. The division of modules in the present application is a logical division; in practical applications there may be other ways of division, for example multiple modules may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connections shown or discussed may be through some ports, and the indirect coupling or communication connections between modules may be electrical or in other similar forms, none of which is limited in the present application. Moreover, modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed into multiple circuit modules; some or all of them may be selected according to actual needs to achieve the purpose of the solution of the present application.

Claims (22)

  1. An image processing method, characterized by comprising:
    acquiring an image to be processed, wherein the image to be processed is acquired by a vision sensor;
    calibrating an angle of the image to be processed according to installation parameters of the vision sensor.
  2. The method according to claim 1, wherein the calibrating the angle of the image to be processed according to the installation parameters of the vision sensor comprises:
    calibrating the angle of the image to be processed according to the installation parameters of the vision sensor, so that an included angle between a target object in the image to be processed and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to a horizontal plane.
  3. The method according to claim 2, wherein the installation parameters comprise a reference angle, the reference angle being used to indicate an angle by which a mounting surface of the vision sensor is deflected in a first direction relative to the horizontal plane, and the calibrating the angle of the image to be processed according to the installation parameters of the vision sensor comprises:
    rotating the image to be processed in a second direction by a target angle, a deviation between the target angle and the reference angle being within a preset range, the second direction being opposite to the first direction.
  4. The method according to any one of claims 1 to 3, wherein, when a deviation between current installation parameters of the vision sensor and initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters.
  5. The method according to any one of claims 1 to 3, wherein, when the deviation between current installation parameters of the vision sensor and initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters and the initial installation parameters.
  6. The method according to any one of claims 1 to 5, wherein the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
  7. The method according to any one of claims 1 to 6, wherein the vision sensor is deployed on a target region of a vehicle, the target region including at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console.
  8. An electronic device, characterized in that the electronic device comprises an acquisition module and a calibration module,
    the acquisition module being configured to acquire an image to be processed, wherein the image to be processed is acquired by a vision sensor;
    the calibration module being configured to calibrate an angle of the image to be processed acquired by the acquisition module according to installation parameters of the vision sensor.
  9. The electronic device according to claim 8, wherein the calibration module is specifically configured to:
    calibrate the angle of the image to be processed according to the installation parameters of the vision sensor, so that an included angle between a target object in the image to be processed and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to a horizontal plane.
  10. The electronic device according to claim 9, wherein the installation parameters comprise a reference angle, the reference angle being used to indicate an angle by which a mounting surface of the vision sensor is deflected in a first direction relative to the horizontal plane, and the calibration module is specifically configured to:
    rotate the image to be processed in a second direction by a target angle, a deviation between the target angle and the reference angle being within a preset range, the second direction being opposite to the first direction.
  11. The electronic device according to any one of claims 8 to 10, wherein, when a deviation between current installation parameters of the vision sensor and initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters.
  12. The electronic device according to any one of claims 8 to 10, wherein, when the deviation between current installation parameters of the vision sensor and initial installation parameters exceeds a preset threshold, the reference angle is updated to a reference angle determined from the current installation parameters and the initial installation parameters.
  13. The electronic device according to any one of claims 8 to 12, wherein the calibrated image to be processed is used to identify whether the image to be processed includes a human face.
  14. The electronic device according to any one of claims 8 to 13, wherein the vision sensor is deployed on a target region of a vehicle, the target region including at least one of an A-pillar, an interior rearview mirror, a steering column, an instrument panel, and a center console.
  15. The electronic device according to any one of claims 8 to 14, wherein the electronic device is one or more of a processor, a camera module, a vehicle-mounted apparatus, a smart vehicle, a cloud device, or a server.
  16. An electronic device, characterized in that the electronic device comprises a processor, the processor being coupled to a memory, the memory storing program instructions which, when executed by the processor, implement the method according to any one of claims 1 to 7.
  17. An electronic device, characterized in that the electronic device comprises an interface circuit, the interface circuit being coupled to a processor and configured to receive program instructions, so that the method according to any one of claims 1 to 7 is implemented when the processor executes the program instructions received by the interface circuit.
  18. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to execute the method according to any one of claims 1 to 7.
  19. A chip, characterized in that the chip is coupled to a memory and configured to execute a program stored in the memory, so as to execute the method according to any one of claims 1 to 7.
  20. A camera assembly, characterized in that the camera assembly comprises a vision sensor and a processor,
    the vision sensor being configured to acquire an image to be processed, wherein the image to be processed is acquired by the vision sensor;
    the processor being configured to calibrate an angle of the image to be processed according to installation parameters of the vision sensor.
  21. The camera assembly according to claim 20, wherein the processor is specifically configured to:
    calibrate the angle of the image to be processed according to the installation parameters of the vision sensor, so that an included angle between a target object in the image to be processed and a target coordinate axis is within a first preset range, the target coordinate axis being a coordinate axis perpendicular to a horizontal plane.
  22. The camera assembly according to claim 21, wherein the installation parameters comprise a reference angle, the reference angle being used to indicate an angle by which a mounting surface of the vision sensor is deflected in a first direction relative to a horizontal plane, and the processor is specifically configured to:
    rotate the image to be processed in a second direction by a target angle, a deviation between the target angle and the reference angle being within a preset range, the second direction being opposite to the first direction.
PCT/CN2021/091556 2021-04-30 2021-04-30 Image processing method and apparatus WO2022227020A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/091556 WO2022227020A1 (zh) 2021-04-30 2021-04-30 Image processing method and apparatus
CN202180001504.3A CN113348464A (zh) 2021-04-30 2021-04-30 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/091556 WO2022227020A1 (zh) 2021-04-30 2021-04-30 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2022227020A1 true WO2022227020A1 (zh) 2022-11-03

Family

ID=77481194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091556 WO2022227020A1 (zh) 2021-04-30 2021-04-30 一种图像处理方法以及装置

Country Status (2)

Country Link
CN (1) CN113348464A (zh)
WO (1) WO2022227020A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685366A (zh) * 2012-04-01 2012-09-19 深圳市锐明视讯技术有限公司 一种图像自动校正方法及系统和监控设备
CN103079902A (zh) * 2010-09-06 2013-05-01 爱信精机株式会社 驾驶支援装置
CN107682618A (zh) * 2016-08-02 2018-02-09 昆山研达电脑科技有限公司 图像自动校正系统及其方法
CN110460769A (zh) * 2019-07-05 2019-11-15 浙江大华技术股份有限公司 图像矫正方法、装置、计算机设备和存储介质
CN111717768A (zh) * 2019-03-20 2020-09-29 东芝电梯株式会社 图像处理装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111457886B (zh) * 2020-04-01 2022-06-21 北京迈格威科技有限公司 距离确定方法、装置及系统
CN111696160B (zh) * 2020-06-22 2023-08-18 江苏中天安驰科技有限公司 车载摄像头自动标定方法、设备及可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103079902A (zh) * 2010-09-06 2013-05-01 爱信精机株式会社 驾驶支援装置
CN102685366A (zh) * 2012-04-01 2012-09-19 深圳市锐明视讯技术有限公司 一种图像自动校正方法及系统和监控设备
CN107682618A (zh) * 2016-08-02 2018-02-09 昆山研达电脑科技有限公司 图像自动校正系统及其方法
CN111717768A (zh) * 2019-03-20 2020-09-29 东芝电梯株式会社 图像处理装置
CN110460769A (zh) * 2019-07-05 2019-11-15 浙江大华技术股份有限公司 图像矫正方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN113348464A (zh) 2021-09-03

Similar Documents

Publication Publication Date Title
US11915492B2 (en) Traffic light recognition method and apparatus
CN107816976B (zh) 一种接近物体的位置确定方法和装置
US10845796B2 (en) Electronic control units, vehicles, and methods for switching vehicle control from an autonomous driving mode
CN111386217B (zh) 用于在可移动物体的手动控制与自主控制之间进行切换的技术
CN107891809B (zh) 自动驻车辅助装置、提供自动驻车功能的方法以及车辆
CN107776574B (zh) 一种自动驾驶车辆的驾驶模式切换方法和装置
WO2021184218A1 (zh) 一种相对位姿标定方法及相关装置
CN113518956B (zh) 用于可移动对象的自主控制和人工控制之间的切换的方法、系统及存储介质
WO2022000448A1 (zh) 车内隔空手势的交互方法、电子装置及系统
WO2022204855A1 (zh) 一种图像处理方法及相关终端装置
US20230077837A1 (en) Collaborative perception for autonomous vehicles
US11812197B2 (en) Information processing device, information processing method, and moving body
WO2020150237A1 (en) Weighted normalized automatic white balancing
CN115220449B (zh) 路径规划的方法、装置、存储介质、芯片及车辆
WO2021217575A1 (zh) 用户感兴趣对象的识别方法以及识别装置
CN111971635A (zh) 移动体、信息处理装置、信息处理方法和程序
CN115348657A (zh) 用于车辆时间同步的系统架构、方法及车辆
WO2020116194A1 (ja) 情報処理装置、情報処理方法、プログラム、移動体制御装置、及び、移動体
KR102533246B1 (ko) 항법 장치 및 이를 포함하는 차량 운전 보조장치
WO2022227020A1 (zh) 一种图像处理方法以及装置
CN114937351B (zh) 车队控制方法、装置、存储介质、芯片、电子设备及车辆
CN115100630B (zh) 障碍物检测方法、装置、车辆、介质及芯片
US20210357667A1 (en) Methods and Systems for Measuring and Mapping Traffic Signals
CN114092898A (zh) 目标物的感知方法及装置
CN115082886B (zh) 目标检测的方法、装置、存储介质、芯片及车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938494

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21938494

Country of ref document: EP

Kind code of ref document: A1