CN113212305A - Use method of roof camera system applied to self-driving tourism - Google Patents


Info

Publication number
CN113212305A
CN113212305A
Authority
CN
China
Prior art keywords
camera
image
roof
mode
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010071546.1A
Other languages
Chinese (zh)
Other versions
CN113212305B (en)
Inventor
肖文平
何敖东
张航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hinge Electronic Technologies Co Ltd
Original Assignee
Shanghai Hinge Electronic Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hinge Electronic Technologies Co Ltd filed Critical Shanghai Hinge Electronic Technologies Co Ltd
Priority to CN202010071546.1A priority Critical patent/CN113212305B/en
Publication of CN113212305A publication Critical patent/CN113212305A/en
Application granted granted Critical
Publication of CN113212305B publication Critical patent/CN113212305B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0001Arrangements for holding or mounting articles, not otherwise provided for characterised by position
    • B60R2011/004Arrangements for holding or mounting articles, not otherwise provided for characterised by position outside the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0042Arrangements for holding or mounting articles, not otherwise provided for characterised by mounting means
    • B60R2011/008Adjustable or movable supports
    • B60R2011/0085Adjustable or movable supports with adjustment by rotation in their operational position
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/102Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using 360 degree surveillance camera system
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/303Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a method of using a roof camera system for self-driving tours, comprising the following steps: the user presets, according to the self-driving tour route, the intervals in which the roof cameras are started and the shooting mode; when the vehicle drives into a preset shooting interval, the vehicle-mounted central control unit automatically starts the cameras in the preset shooting mode. When the position reported by the positioning module matches the position of a preset shooting interval, the vehicle-mounted central control unit calls the control module to start the roof cameras, which shoot automatically in the preset mode. According to the invention, several roof cameras shoot at different angles according to their arrangement, and their images are then stitched to form a 360-degree surround-view image.

Description

Use method of roof camera system applied to self-driving tourism
Technical Field
The invention relates to the field of automobiles, and in particular to a method of using a roof camera system for self-driving tours.
Background
With the popularization of the automobile, cars have entered more and more households; as living standards keep rising, both the number of cars and the number of people travelling for leisure keep growing. At present, people mainly travel either with a tour group or by self-driving. Group tours have many drawbacks: they are constrained by group activities, and each person's preferences in scenery, length of stay and type of sights differ — some like mountains and rivers while others like animals — which is difficult for a guide to coordinate, so the trip may end unhappily. Group travel is also time-limited. More and more young people therefore prefer self-driving tours, which are flexible: the sights to visit and the schedule are unrestricted and can be set by the travellers themselves, and the beautiful scenery on both sides of the road can be enjoyed along the way. To record that scenery, passengers can take photos with a mobile phone or camera, but this is inconvenient: the vehicle shakes while driving, holding a phone or camera for a long time numbs the arms and degrades the photos, and on uneven road surfaces the devices are easily jolted.
To address the recording of scenery, a growing number of research institutions and companies are working on the problem. In the prior art, Chinese patent publication No. CN 105034953 A discloses a panoramic driving-tour recorder with four cameras mounted on the roof of a vehicle that collect panoramic information in real time. Because the cameras cannot rotate, however, the collection range and angle are limited and information cannot be collected in a direction chosen by the user; sometimes the beautiful scenery is directly ahead of the vehicle and there is nothing worth shooting in other directions, so panoramic shooting inevitably produces frames without scenery, and the viewing angle of the cameras cannot be adjusted. Chinese utility model patent publication No. CN 205890755 U discloses a roof camera comprising a base, a top frame, a pan-tilt head, a link mechanism, a camera and a first motor: the first motor is mounted horizontally on the base with its output shaft connected to the link mechanism, the base is hinged to the top frame through the link mechanism, the pan-tilt head is mounted on the top frame, and the camera is mounted at the front end of the pan-tilt head. The camera can rotate through 360 degrees, but it can only capture the scene in one particular direction at a time and cannot collect panoramic information.
Disclosure of Invention
Based on the above defects in the prior art, the invention provides a method of using a roof camera system for self-driving tours, comprising the following steps: step S1, the user presets, according to the self-driving tour route, the shooting intervals in which the roof cameras are to be started and the shooting mode;
step S2, when the vehicle drives into a preset shooting interval, the vehicle-mounted central control unit automatically starts the cameras in the preset shooting mode;
the preset shooting interval of step S1 is implemented by having the vehicle-mounted central control unit monitor the data of the positioning module in real time; when the position reported by the positioning module matches the position of a preset shooting interval, the central control unit calls the control module to start the roof cameras, which shoot automatically in the preset shooting mode.
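The interval matching of steps S1 and S2 can be sketched as follows; the data layout and names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ShootingInterval:
    """A user-preset shooting interval along the route (hypothetical layout)."""
    start_km: float   # route position where shooting should start
    end_km: float     # route position where shooting should stop
    mode: str         # preset shooting mode to use inside the interval

def match_interval(position_km, intervals):
    """Return the preset interval covering the current position, or None.
    The central control unit would poll the positioning module and, on a
    match, call the control module to start the cameras in interval.mode."""
    for interval in intervals:
        if interval.start_km <= position_km <= interval.end_km:
            return interval
    return None

intervals = [ShootingInterval(12.0, 18.5, "panorama"),
             ShootingInterval(40.0, 45.0, "mode_1_left")]
hit = match_interval(15.2, intervals)   # inside the first interval
```

In practice the position would come from the GPS/positioning module rather than a route odometer, but the matching logic is the same.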
In the method, further, before a preset shooting interval is reached, the roof cameras are started in advance for adjustment and checking; the pre-start occurs 1-5 km, or 1-10 min, before the starting point of the preset shooting interval.
Further, the system comprises one or more roof cameras whose operations are called remotely through the control module of the vehicle-mounted central control unit, and the operation icons corresponding to the control module are displayed on the display interface of a touch display screen.
Further, the display interface of the touch display screen comprises a preset function interface, a roof-camera control interface and a height adjustment interface. The preset function interface comprises a return-to-main-interface icon, a panorama icon, preset-shooting-mode icons and a non-panorama icon; when the user taps the panorama icon, the roof cameras shoot a panoramic image according to the preset panoramic mode, and when the user taps the non-panorama icon, the touch display screen displays several non-panoramic shooting modes for the user to select.
Further, the preset shooting modes comprise: mode 1, all cameras shoot to the left; mode 2, all cameras shoot to the right; mode 3, all cameras shoot forward; mode 4, all cameras shoot backward; mode 5, the cameras shoot to the left and right; mode 6, the cameras shoot forward and backward; mode 7, panoramic shooting; mode 8, a user-defined mode in which the user sets the shooting angle and scene as required. The user can modify, add or delete shooting modes as needed.
Further, the roof-camera control interface shows the position and rotation angle of each roof camera; by touching the corresponding roof camera, its rotation angle can be adjusted so that it observes a preset direction.
The height adjustment interface is used to raise or lower the roof cameras; touching the height adjustment interface adjusts a roof camera up or down.
Further, the preset shooting modes include a panoramic shooting mode. A panoramic image acquired in this mode must be obtained by image stitching, and before stitching the roof cameras must be calibrated to obtain their intrinsic and extrinsic parameters. The extrinsic parameters are obtained by a first, second or third acquisition mode.
the first acquisition mode includes: calculating external parameters corresponding to the preset height of the roof camera within the adjustable height range by adjusting the height of the roof camera, and storing the external parameters in an external parameter mapping table; after panoramic photography is carried out, calling an external parameter mapping table to find an external parameter corresponding to the height of the current roof camera;
the second acquisition mode includes:
step Sa, during panoramic shooting, fix the rotation angle of each roof camera; then adjust the roof camera to a preset height. With the coordinate systems established in this way, only the Z-axis coordinate of the camera coordinate system changes. The relation of a roof camera between the pixel coordinate system and the world coordinate system is:

s·[u, v, 1]^T = [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]] · [R | T] · [X, Y, Z, 1]^T    (1)

[x, y, z]^T = R·[X, Y, Z]^T + T    (2)

In formula (1), (u, v) are the two-dimensional pixel coordinates of the image point; f_u and f_v are the focal lengths of the camera in pixel units along the horizontal and vertical axes of the image; (u_0, v_0) are the coordinates of the image centre point; s is the projective scale factor; (X, Y, Z) are the coordinates of a spatial point in the world coordinate system; (x, y, z) are its coordinates in the camera coordinate system; (R, T) are the extrinsic parameters, R being the rotation matrix and T the translation vector;
step Sb, adjusting the height h of the roof camerasAnd placing a calibration plate around the vehicle body, obtaining known points (u, v) corresponding to pixel coordinates of points (X, Y, Z) under a plurality of world coordinate systems in corresponding roof cameras, calculating external parameter matrixes Rs and Ts under a preset height by using a formula (1), and solving points (X) corresponding to the points (X, Y, Z) under the known coordinate systems in a camera coordinate system by using a formula (2)s,ys,zs);
Step Sc, when the roof camera is adjusted to ascend or descend, according to the geometrical change relation of rotation and translation of the coordinates, when the height of the roof camera is h, the (x) of the space point in the camera coordinate systemh,yh,zh) H compared with the height of the roof camerasWhen it is xh=xs,yh=ysRemains unchanged, zhAnd (4) changing, calculating a formula:
Figure BDA0002377434130000043
k is a proportionality coefficient, e is a constant, and k and e can be calibrated and solved by presetting different camera heights;
step Sd, height of the camera on the roof is h due to the spacesCoordinate of time (x)s,ys,zs) The coordinate point of (2) is known, and the coordinate (x) of the space point at the camera when the height of the roof camera is h is solved by formula (3)h,yh,zh) A point formed by combining a plurality of coordinate points (x) having a height hh,yh,zh) Substituting into formula (2), the coordinates (X, Y, Z) of the space point in the world coordinate system are known, and calculating the external parameters (R) of the camera by using formula (2)h,Th);
The third acquisition mode: adjust the roof camera to a number of different preset heights h_i and repeat steps Sa to Sd of the second acquisition mode to obtain a set of rotation matrices R_h^(1), …, R_h^(n) and translation matrices T_h^(1), …, T_h^(n); then compute the rotation matrix R_h and translation matrix T_h at the current height h as

R_h = (1/n) · Σ_{i=1..n} R_h^(i)    (4)

T_h = (1/n) · Σ_{i=1..n} T_h^(i)    (5)

where n is the number of calibrations, h_i is the height of the roof camera at the i-th calibration, and R_h^(i) and T_h^(i) are respectively the rotation matrix and translation matrix for height h computed from the extrinsic matrix corresponding to height h_i.
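The third mode's averaging of the per-calibration estimates can be sketched as follows. Note that an element-wise mean of rotation matrices is generally not itself a rotation; the SVD re-orthonormalization step below is a common refinement added here for illustration, not part of the patent:

```python
import numpy as np

def average_extrinsics(Rs, Ts):
    """Average n extrinsic estimates, then project the mean rotation back
    onto SO(3) via SVD so a valid rotation matrix is returned."""
    R_mean = np.mean(Rs, axis=0)
    T_h = np.mean(Ts, axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    R_h = U @ Vt
    if np.linalg.det(R_h) < 0:    # keep a proper rotation, not a reflection
        U[:, -1] *= -1
        R_h = U @ Vt
    return R_h, T_h

# Two nearby estimates of the same pose (placeholder values).
Rs = [np.eye(3), np.eye(3)]
Ts = [np.array([0.0, 0.0, 0.48]), np.array([0.0, 0.0, 0.52])]
R_h, T_h = average_extrinsics(Rs, Ts)
```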
The method further comprises: step S3, the user obtains the pictures through an intelligent terminal and shares them with members having preset authority for interaction.
Step S3 specifically comprises: the user obtains the shot pictures or videos, selects those to be shared and sends them to a cloud sharing server, and the cloud sharing server forwards the received pictures or videos to the intelligent terminals of the members with preset authority; those members view the received pictures or videos and operate on them while viewing, the operations including writing comments, liking, adding marks and doodling;
the cloud sharing server records the operations of the preset-authority members in real time, finds the top-ranked pictures according to the number of views, comments or likes by those members, obtains similar pictures through a built-in algorithm and recommends them to the members with preset authority for viewing.
Further, images or videos shot by the roof cameras are stored in a local storage space, or are stored there temporarily and then uploaded to the cloud sharing server; the local storage space comprises at least a first partition and a second partition. The first partition stores data cyclically: when it is full, new data overwrites the oldest data in first-in-first-out order.
The second partition stores links to the images or videos in the cloud sharing server or in the first partition; data in the second partition is never deleted automatically.
The images or videos are displayed in groups, matched automatically or customized according to the user's selection, grouped by time, street name, scenic-spot name, roof-camera name or route name. Grouped display is achieved by adding a tag to the picture attributes, classifying the pictures by tag, and displaying the total number of pictures in each group.
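A minimal sketch of the two-partition local store (a fixed-capacity ring buffer plus a non-expiring link partition); the capacity and record format are illustrative:

```python
from collections import deque

class MediaStore:
    """Sketch of the two-partition local storage described above."""
    def __init__(self, capacity):
        # Partition 1: fixed-capacity ring buffer; when full, the oldest
        # item is overwritten first-in-first-out.
        self.partition1 = deque(maxlen=capacity)
        # Partition 2: permanent links (cloud URLs or partition-1 keys);
        # entries here are never deleted automatically.
        self.partition2 = []

    def store(self, image_id, cloud_url):
        self.partition1.append(image_id)    # may evict the oldest image
        self.partition2.append(cloud_url)   # the link survives eviction

store = MediaStore(capacity=3)
for i in range(5):
    store.store(f"img{i}", f"https://cloud.example/img{i}")
# partition1 now holds only the 3 newest images; all 5 links remain.
```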
Further, a self-driving tour module is provided in the intelligent terminal or the vehicle-mounted central control unit, comprising an authority setting module, a photo selection module and a viewing module.
The authority setting module is configured to set the list of members who can view the photos and those members' permissions to copy, comment on and edit the photos; the photo selection module is configured to let the user select the photos or videos to be shared; the viewing module is configured to let the members view the photos or videos shared by the user.
Further, each roof camera comprises a video acquisition module, an MCU microcontroller containing a MAC module, and a 100BASE-T1 PHY module; the video acquisition module is connected to the microcontroller, and the PHY module is connected to the MAC module in the MCU through an RMII interface.
The video acquisition module is configured to perform video image acquisition and compression, and the compressed video image data are transmitted to the MCU for processing; the video acquisition module comprises a DSP processing chip, and the video images comprise Bayer images.
Further, the transmission mechanism for the data packets produced by the roof cameras uses the UDP/IP protocol suite as the communication carrier, with a message confirmation mechanism added when important messages are transmitted. The SOME/IP protocol serves as the communication control protocol of the whole roof camera system: the start/stop control functions of the roof cameras are encapsulated as SOME/IP services, and remote calls are completed through the vehicle-mounted central control unit.
The collected video or images of the roof cameras are transmitted using the automotive-Ethernet AVB protocol suite, which comprises the IEEE 802.1AS precise time synchronization protocol, the IEEE 802.1Qav queuing and forwarding protocol, the IEEE 802.1Qat stream reservation protocol and the IEEE 1722 audio/video bridging transport protocol.
Further, after a roof camera collects images, the images are compressed and then sent to the vehicle-mounted central control unit for decompression.
Compression of an image comprises encoding the acquired image, the encoding steps being: Bayer image blocking, intra-block prediction, computation of the prediction residuals of the image blocks, and entropy coding of the prediction modes.
Bayer image blocking divides the original image equally into image blocks of the same size.
Intra-block prediction comprises: for each block obtained by partitioning, selecting among several preset modes for predicting its grey values according to the relations between adjacent values within the block.
The preset modes are applied in turn to obtain a predicted value for each block; the differences between the predicted values and the true values are computed and rounded down, the sum of the absolute values of the residuals is obtained, and the prediction mode with the smallest absolute residual sum is selected as the final coding mode.
Prediction-mode entropy coding: the processed prediction residuals are entropy coded, with the pixel residuals entropy coded by Huffman coding and the optimal prediction mode of each image block entropy coded by exponential Golomb coding.
Decompression of an image comprises decoding the compressed image: the input bitstream is entropy decoded to obtain the prediction residuals and prediction mode of each image block, inverse prediction is applied to the residuals of each block using its prediction mode, and the recovered pixel blocks are reassembled to obtain the complete, lossless original Bayer image.
Entropy decoding comprises: inputting the coded bitstream of each image block and Huffman-decoding it to obtain the pixel prediction residuals of each block; and exponential-Golomb-decoding the bitstream formed by the per-block prediction modes to obtain the optimal prediction mode of each block.
Inverse prediction and image reconstruction: using the optimal prediction mode of each block, all blocks are inversely predicted in turn to obtain the predicted pixel values at every position of every image block; the predicted values are added to the decoded residual values, and 255 is subtracted from any sum greater than 255, finally yielding the grey values of the original image; each block is reconstructed in turn to obtain the original Bayer image.
Has the advantages that:
1. according to the technical scheme provided by the invention, the pictures in the self-driving tour process can be automatically shot and stored according to the setting of the user, and the pictures can be stored in groups according to a preset mode, so that the pictures can be conveniently checked and shared by the user. The pictures can be used for acquiring panoramic pictures and non-panoramic pictures.
2. When the roof cameras sit at different heights, stitching panoramic images does not require calibration by placing a calibration plate: the intrinsic and extrinsic parameters of the roof cameras are obtained directly from a parameter mapping table or a built-in algorithm. This solves the problem that a change in roof-camera position alters the intrinsic and extrinsic parameters and would otherwise require re-calibration before shooting can proceed.
3. The vehicle body storage is divided into two partitions: one is allocated for cyclic storage and the other stores only links to the pictures. This saves storage space, so new pictures can be stored locally in time while old pictures are uploaded to the cloud server; when the user wants to view them, they can be downloaded directly from the cloud via the picture links.
Drawings
The following drawings are only schematic illustrations and explanations of the present invention, and do not limit the scope of the present invention.
Fig. 1 is a schematic structural diagram of a roof camera system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of setting and displaying interfaces of a roof camera displayed by an entertainment display screen in an embodiment of the invention, fig. 2a shows a selection interface for selecting to view the roof camera, fig. 2b shows a selection interface for selecting a picture of a shooting position required to be viewed, fig. 2c shows a picture shot by three roof cameras for displaying and viewing, and fig. 2d shows a spliced panoramic image shot by the roof camera.
Fig. 3 is a schematic diagram of an intelligent terminal including a self-driving tour module according to an embodiment of the present invention.
FIG. 4 is a flowchart of a photo process for a user sharing a self-driving tour with friends in an embodiment of the invention.
Fig. 5 is a schematic structural diagram of a roof camera in an embodiment of the present invention.
Fig. 6 is an example of pixel matrix indices after blocking in an original picture in an embodiment of the present invention.
Fig. 7 is a schematic view of an operation interface of a control module for controlling a top camera according to an embodiment of the present invention, where fig. 7a is a single top camera display operation interface, and fig. 7b is a top camera panoramic display interface.
Fig. 8 is a schematic diagram of a shooting and displaying area of a roof camera in the self-driving tourism process in an embodiment of the invention.
FIG. 9 is a schematic diagram of a picture or video display interface for a user viewing a self-driving tour taken during an embodiment of the present invention.
Detailed Description
In order to more clearly understand the technical features, objects and effects herein, embodiments of the present invention will now be described with reference to fig. 1 to 9, in which like reference numerals refer to like parts throughout. For the sake of simplicity, the drawings are schematic representations of relevant parts of the invention and are not intended to represent actual structures as products. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled.
As for the control system, functional modules and application programs (APPs) are well known to those skilled in the art and may take any suitable form, either hardware or software: they may be a number of discretely arranged functional modules, or a number of functional units integrated into one piece of hardware. In its simplest form, the control system may be a controller, such as a combinational logic controller or a microprogram controller, so long as the operations described herein are enabled. Of course, the control system may also be integrated as different modules into one physical device without departing from the basic principle and scope of the invention.
The term "connected" in the present invention may include direct connection, indirect connection, communication connection, and electrical connection, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, values, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It should be understood that the term "vehicle" or "vehicular" or other similar terms as used herein generally includes motor vehicles, such as passenger automobiles including Sport Utility Vehicles (SUVs), buses, trucks, various commercial vehicles, watercraft including a variety of boats, ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles, and other alternative fuel vehicles (e.g., fuels derived from non-petroleum sources). As referred to herein, a hybrid vehicle is a vehicle having two or more power sources, such as both gasoline-powered and electric-powered vehicles.
Further, the controller of the present disclosure may be embodied as a non-transitory computer readable medium containing executable program instructions executed by a processor, controller, or the like. Examples of computer readable media include, but are not limited to, ROM, RAM, Compact Disc (CD)-ROM, magnetic tape, floppy disk, flash drive, smart card, and optical data storage device. The computer readable recording medium can also be distributed over network-coupled computer systems so that the computer readable medium is stored and executed in a distributed fashion, such as by a telematics server or Controller Area Network (CAN).
Example 1
This embodiment provides a roof camera system applied to self-driving tourism. Referring to fig. 1, the system comprises one or more roof cameras, a vehicle-mounted central controller and a touch display screen; the vehicle-mounted central controller is connected to the roof camera device and the touch display screen respectively, and the image data acquired by the cameras is sent to the vehicle-mounted central controller for processing and then displayed on the touch display screen;
This embodiment further provides that the roof camera system comprises a T-box (telematics box), a cloud sharing server, a positioning module and an intelligent terminal. The vehicle-mounted central control is connected to the positioning module and the T-box respectively; the T-box is wirelessly connected to the cloud sharing server and to the intelligent terminal, the wireless connection being one of Wi-Fi, Bluetooth, 3G, 4G and 5G communication; the roof camera and the T-box are each connected to the vehicle-mounted central control through a vehicle-mounted Ethernet network;
the T-box is arranged on the roof, the roof camera is arranged on the T-box, and the rotation, the ascending and the descending of the roof camera are operated in real time through the touch display screen;
the vehicle-mounted central control processes the acquired picture and then transmits the processed picture to the cloud sharing server or the intelligent terminal through the T-box, or the cloud sharing server transmits the picture data to the intelligent terminal, or the intelligent terminals can directly share the picture;
the embodiment provides a car roof camera system applied to self-driving tourism, and the car roof camera system further comprises a positioning module and an entertainment display screen, wherein the positioning module is used for navigation in the self-driving tourism process, and comprises a GPS positioning chip, a Beidou positioning chip or a Beidou and GPS double positioning chip;
the entertainment display screen receives the pictures transmitted from the vehicle-mounted central control, and a user selects to view the pictures according to the needs, wherein the viewing mode comprises the following steps: the screen of the entertainment display screen is divided into a plurality of areas, and each area specially displays the picture shot by the corresponding roof camera; or displaying the pictures shot by the roof camera selected by the user or displaying the panoramic pictures spliced and synthesized by the plurality of cameras by using the screen of the whole entertainment display screen.
The entertainment display screen is arranged on the back of a front seat inside the automobile, on the roof, on a door or elsewhere, and provides in-car passengers with entertainment viewing. It receives the landscape pictures transmitted by the vehicle-mounted central control, and the user can choose how to view them: the screen can be divided into multiple regions, each dedicated to the pictures shot by a corresponding camera; the whole screen can display the pictures shot by one selected camera; or it can display the panoramic picture stitched and synthesized from the multiple cameras.
Fig. 2a to 2d are schematic diagrams of the display modes of the entertainment display screen in this embodiment. In fig. 2a, three roof cameras are taken as an example: icons A1, A2, A3 and a panorama icon are displayed, where A1, A2 and A3 represent the first, second and third roof cameras respectively. The three cameras are independent of one another and can each rotate 360 degrees, and the user views a camera's pictures by clicking the corresponding icon: clicking A1 means viewing the pictures shot by the first camera, and clicking the panorama icon means viewing the panoramic picture. The user can select one or more cameras as needed and, after selecting the cameras to observe, view the selected scenic-spot pictures. Fig. 2b shows scenes shot along the road by the roof camera; to make viewing convenient, the pictures are displayed in groups according to the self-driving tour route. The grouping can take various forms, matched automatically or customized by the user, for example by time, street name, scenic-spot name, camera name or route name; grouping is implemented by adding labels to the picture attributes, classifying the pictures by label, and displaying the total number of pictures in each group. Fig. 2c shows the user choosing to view the pictures taken by the three cameras simultaneously: the entertainment display screen is divided into three regions, a first region showing the photograph taken by roof camera A1, a second region showing that of roof camera A2, and a third region showing that of roof camera A3.
When the user clicks on the panoramic image, the entertainment display screen displays the panoramic image in a full screen manner.
When the user views the photo, the user can manually turn pages, can set the time interval of the page turning, or can turn pages by voice or gestures.
Passengers in the vehicle or other self-driving tourists can view pictures or videos through an intelligent terminal, in the same manner as on the entertainment display screen. The intelligent terminal contains a self-driving tour module comprising a permission setting module, a picture selection module and a viewing module; the self-driving tour module can be installed, as an application-layer program, in the intelligent terminal, the vehicle-mounted central control, the cloud sharing server or similar equipment. The permission setting module is configured to set the list of members who can view the pictures and each member's permissions to copy, comment on and edit them, as shown in fig. 3; the members' permissions can be identical or different and can be set per member. The picture selection module is configured to let the user select the photos or videos to be shared; the viewing module lets members view the photos or videos shared by the user;
referring to fig. 4, the method for sharing the self-driving tour photo with the friend by the user includes the following specific steps:
the method comprises the steps that a user obtains a shot picture or video, then selects the video or picture needing to be shared and sends the video or picture to a cloud server, and the cloud server shares the received video or picture to members with preset authority;
receiving the picture sent by the user by a member with preset authority, then viewing the picture, and operating the picture or the video in the viewing process, wherein the operation comprises comment writing, praise, editing and doodling;
A number of top-ranked pictures are found according to the number of times they are viewed, commented on or liked by the members with preset authority; similar pictures are then obtained through a built-in algorithm and recommended to those members for viewing. The built-in algorithm includes: searching for photos shot by the same roof camera in a similar time period; searching for photos of the same scene by photo recognition; searching for photos of the same geographic location according to the identified photo's position; or searching for photos with the same characteristics according to the identified photo's features.
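The ranking step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the engagement field names (views, comments, likes) are assumptions for the example.

```python
# Hypothetical sketch: rank shared photos by an engagement metric and pick
# the top-N as seeds for the similar-photo recommendation step.
def top_photos(photos, key="views", n=3):
    """Return the n highest-ranked photos by the given engagement metric."""
    return sorted(photos, key=lambda p: p[key], reverse=True)[:n]

shared = [
    {"id": "p1", "views": 40, "comments": 2, "likes": 11},
    {"id": "p2", "views": 95, "comments": 9, "likes": 30},
    {"id": "p3", "views": 12, "comments": 1, "likes": 4},
    {"id": "p4", "views": 60, "comments": 5, "likes": 21},
]

# The seeds would then be fed to the built-in similarity search (same
# camera, similar time period, same scene, same geographic position, ...).
seeds = top_photos(shared, key="views", n=2)
```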
Referring to fig. 5, the roof camera includes a video capture module, a Microcontroller (MCU) including an MAC module (data link layer), and a PHY module, the video capture module is connected to the microcontroller, and the PHY module is connected to the MAC module in the MCU through an RMII interface;
the video acquisition module is configured to be used for carrying out video image acquisition and compression processing operations, and compressed video image data is transmitted to the MCU module for processing;
It should be noted that, since the images referred to in the present invention include video (a video being composed of multiple frames of images), for convenience of description only the term "image" is used; an image herein therefore also covers video.
Specifically, the video acquisition module includes a DSP processing chip, the acquired video or image includes a bayer image, and the image compression processing includes: an encoding process of the image;
the image coding process comprises Bayer image blocking, image intra-block prediction, image block out-of-prediction residual error and prediction mode entropy coding to obtain a coding code stream;
the Bayer image blocking comprises the steps of equally dividing an original image into image blocks with the same size; specifically, it can be divided into 16 × 16, 32 × 32, 64 × 64, 128 × 128, 512 × 512;
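The blocking step can be sketched as follows, a minimal illustration assuming the image dimensions divide evenly into the chosen block size (the image is represented as a plain 2D list of gray values):

```python
# Minimal sketch of the Bayer-image blocking step: split an image into
# equal B x B tiles, scanned row-major.
def split_blocks(img, b):
    h, w = len(img), len(img[0])
    assert h % b == 0 and w % b == 0, "image must divide evenly into blocks"
    blocks = []
    for r in range(0, h, b):
        for c in range(0, w, b):
            blocks.append([row[c:c + b] for row in img[r:r + b]])
    return blocks

# A 4x4 image split into 2x2 blocks yields 4 blocks.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
blocks = split_blocks(img, 2)
```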
the intra-picture prediction includes: predicting the gray value of each small block after being partitioned, wherein the prediction comprises three prediction modes:
referring to fig. 6, fig. 6 is an example of pixel matrix subscripts after tiling;
Prediction mode 1: for pixel P(i,j), when the position coordinate j ≤ 1, i.e. the pixel lies in the first two columns of the image block, the pixel value at that position is not predicted and is output directly; when j ≥ 2, i.e. the pixel does not lie in the first two columns, the predicted value is P(i,j) = P(i,j-2).
Prediction mode 2: for pixel P(i,j), when j ≤ 1, i.e. the pixel lies in the first two columns of the image block, the pixel value is output directly without prediction; when i ≤ 1 and j is 2 or 3, i.e. the pixel lies in the first two rows and the third or fourth column, the predicted value is P(i,j) = P(i,j-2); when i ≤ 1 and j ≥ 4, i.e. the pixel lies in the first two rows but not in the first four columns, the predicted value is P(i,j) = 0.5·P(i,j-2) + 0.5·P(i,j-4); when i ≥ 2 and j ≥ 2, i.e. the pixel lies neither in the first two rows nor in the first two columns, the predicted value is P(i,j) = 0.5·P(i-2,j) + 0.5·P(i,j-2).
Prediction mode 3: for pixel P(i,j), when i = 0 and j = 0, i.e. the pixel is the first pixel at the top-left corner of the image block, the pixel value is output directly without prediction; when i = 0 and j ≥ 1, i.e. the pixel lies in the first row but not the first column, the predicted value is P(i,j) = P(i,j-1); when i ≥ 1 and j = 0, i.e. the pixel lies in the first column but not the first row, the predicted value is P(i,j) = P(i-1,j); when i ≥ 1 and j ≥ 1, i.e. the pixel lies neither in the first row nor the first column, the predicted value is P(i,j) = 0.5·P(i-1,j) + 0.5·P(i,j-1).
The encoder sequentially computes the predicted values under the three preset image-block prediction modes, takes the difference between each predicted value and the true value, rounds the difference down, sums the absolute values of the residuals, and selects the prediction mode with the smallest residual sum as the final coding mode for the block;
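The three prediction modes and the residual-based mode selection can be sketched as below. This is an illustrative reading, not the patent's code: where the source is ambiguous, mode 2's first-two-rows far-column case is interpreted as averaging the pixels two and four columns to the left, and residuals are taken as abs(floor(actual minus predicted)).

```python
import math

def predict(P, i, j, mode):
    """Predicted value for pixel P[i][j], or None where that mode emits
    the pixel directly without prediction."""
    if mode == 1:
        return P[i][j - 2] if j >= 2 else None
    if mode == 2:
        if j <= 1:
            return None
        if i <= 1:  # first two rows of the block
            return P[i][j - 2] if j <= 3 else 0.5 * P[i][j - 2] + 0.5 * P[i][j - 4]
        return 0.5 * P[i - 2][j] + 0.5 * P[i][j - 2]
    if mode == 3:
        if i == 0 and j == 0:
            return None
        if i == 0:
            return P[i][j - 1]
        if j == 0:
            return P[i - 1][j]
        return 0.5 * P[i - 1][j] + 0.5 * P[i][j - 1]

def sad(P, mode):
    """Sum of absolute floored residuals over the predicted positions."""
    total = 0
    for i in range(len(P)):
        for j in range(len(P[0])):
            p = predict(P, i, j, mode)
            if p is not None:
                total += abs(math.floor(P[i][j] - p))
    return total

def best_mode(P):
    return min((1, 2, 3), key=lambda m: sad(P, m))

# A smooth horizontal-gradient block: the nearest-neighbour averaging of
# mode 3 gives the smallest residual sum here.
block = [[c * 10 for c in range(4)] for _ in range(4)]
```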
prediction mode entropy coding: entropy coding the processed prediction residual error, and entropy coding the pixel residual error by using Huffman coding; entropy coding the optimal prediction mode of each image block by exponential Golomb coding;
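The exponential-Golomb coding of the per-block mode indices can be sketched with the standard order-0 construction; the patent does not specify the exact variant, so this is an assumption for illustration.

```python
# Order-0 Exp-Golomb: encode n as (leading zeros) + binary(n + 1).
def exp_golomb_encode(n):
    """Encode a non-negative integer as an order-0 Exp-Golomb bit string."""
    x = n + 1
    bits = bin(x)[2:]               # binary without the '0b' prefix
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(stream):
    """Decode a concatenation of Exp-Golomb codewords back to integers."""
    out, pos = [], 0
    while pos < len(stream):
        zeros = 0
        while stream[pos] == "0":   # count the zero prefix
            zeros += 1
            pos += 1
        x = int(stream[pos:pos + zeros + 1], 2)
        pos += zeros + 1
        out.append(x - 1)
    return out

modes = [0, 2, 1, 0, 3]             # optimal mode index per image block
code = "".join(exp_golomb_encode(m) for m in modes)
```

Small mode indices get short codewords, which is why this code suits a field that is usually 0-2.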
the decoding process includes: entropy decoding an input code stream to obtain image block prediction residuals and a prediction mode, performing inverse prediction on the prediction residuals of each image block by using the prediction mode, and reconstructing an obtained image pixel block to obtain a complete lossless original bayer image;
the entropy decoding includes: inputting the coding code stream of each image block, and performing Huffman decoding on the coding code stream to obtain a pixel prediction residual error of each block; and carrying out exponential Golomb decoding on the coded code stream formed by the prediction modes of each block to obtain the optimal prediction modes of different blocks.
Inverse prediction and image reconstruction: sequentially perform inverse prediction on all blocks using each block's optimal prediction mode to obtain the predicted pixel values at every position of every image block; add the predicted values to the decoded residual values, subtracting 255 from any sum greater than 255, to finally obtain the gray values of the original image; and reconstruct each block in turn to obtain the original Bayer image.
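A round-trip sketch of the decoder side, using prediction mode 1 only. As an implementation convenience the unpredicted first-two-column pixels carry their value as a residual against a zero predictor, and the over-255 wrap is ignored for these small sample values.

```python
# Mode-1 forward prediction: residual r = actual - predicted.
def encode_mode1(P):
    h, w = len(P), len(P[0])
    return [[P[i][j] - (P[i][j - 2] if j >= 2 else 0) for j in range(w)]
            for i in range(h)]

# Mode-1 inverse prediction: scan left-to-right so the reference pixel two
# columns to the left is already reconstructed, then actual = pred + r.
def decode_mode1(R):
    h, w = len(R), len(R[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            pred = out[i][j - 2] if j >= 2 else 0
            out[i][j] = pred + R[i][j]
    return out

block = [[12, 34, 14, 36], [50, 60, 52, 61]]
res = encode_mode1(block)
rebuilt = decode_mode1(res)
```

The round trip is lossless, matching the "complete lossless original bayer image" claim for this prediction step.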
The microcontroller is configured to encapsulate video data according to a vehicle-mounted Ethernet protocol, then call a vehicle-mounted Ethernet sending mechanism to send the video data to the vehicle-mounted host for decoding and playing, and simultaneously receive a control message and a feedback message sent by the central control host;
The PHY module comprises a BroadR-Reach interface and adopts BroadR-Reach automotive Ethernet physical-layer technology conforming to the 100BASE-T1 standard; the data link layer MAC adopts the standard IEEE 802.3 MAC layer protocol and is paired with the LwIP (lightweight IP) embedded Ethernet protocol stack to realize accurate one-to-one or one-to-many multicast/broadcast communication in the in-vehicle local area network.
The SOME/IP protocol is used as a communication control protocol of the whole car roof camera system, the start/stop control function related in the car roof camera is encapsulated into a SOME/IP service form, and remote calling is finished through vehicle-mounted central control;
For transmitting the video or images collected by the vehicle cameras, the vehicle-mounted Ethernet adopts the Ethernet-based AVB protocol cluster, which provides a high-quality, low-delay audio and video transmission solution; the AVB protocol cluster comprises the IEEE 802.1AS precise time synchronization protocol, the IEEE 802.1Qav queuing and forwarding protocol, the IEEE 802.1Qat stream reservation protocol and the IEEE 1722 audio/video bridging transport protocol;
The transmission mechanism for data packets generated by the roof camera adopts the UDP/IP protocol suite as the communication carrier of the whole system, with a message confirmation mechanism added when important messages are transmitted to prevent transmission loss.
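The confirmation mechanism amounts to retransmitting until an acknowledgement arrives or a retry budget runs out. A minimal sketch, with the lossy UDP channel simulated by a callable; all names and the retry budget are illustrative assumptions, not the patent's protocol.

```python
import random

def send_with_ack(message, channel, max_retries=5):
    """Retransmit `message` until `channel(message)` reports an ack.
    Returns (delivered, attempts)."""
    for attempt in range(1, max_retries + 1):
        if channel(message):
            return True, attempt
    return False, max_retries

def lossy_channel(loss_rate, rng):
    """Simulated channel: the ack is lost with probability `loss_rate`."""
    return lambda msg: rng.random() >= loss_rate

rng = random.Random(42)            # fixed seed keeps the demo reproducible
ok, tries = send_with_ack(b"camera/start", lossy_channel(0.5, rng))
```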
Example 2:
the embodiment provides a using method of a car roof camera system applied to self-driving tourism, which comprises the following steps:
step S1, presetting a shooting interval and a shooting mode for starting the roof camera for shooting according to the self-driving tour route by the user;
step S2, when the vehicle runs to a preset shooting interval, the vehicle-mounted central control automatically starts a camera to shoot according to a preset shooting mode;
and step S3, sending the captured image to a preset user or a cloud server.
In step S1, the preset shooting section is implemented by monitoring data of the positioning module in real time through the vehicle-mounted central control, and when the position of the positioning module is matched with the preset position, the roof camera control module is called to start the roof camera to perform the preset shooting mode for automatic shooting;
Since starting up and debugging the roof camera takes some time, the roof camera is started in advance, before reaching the preset position, to capture test images or video for debugging, so that the shooting positions set by the user are not missed; after debugging is finished no pictures are stored yet, and shooting proper begins once the preset position is reached;
The pre-start point can be set 1-5 km, or 1-10 min, ahead of the preset shooting position: the distance is calculated from the current travelling position and the preset position, and the time is calculated from the current vehicle speed and the distance to the preset position.
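The pre-start decision can be sketched as below: warm-up begins when either the remaining distance or the estimated time-to-arrival at the current speed falls below its threshold. Positions are simplified to points on a plane and the thresholds and names are illustrative assumptions within the 1-5 km / 1-10 min ranges given above.

```python
import math

def remaining_km(cur, preset):
    """Straight-line distance between two planar positions, in km."""
    return math.dist(cur, preset)

def should_prestart(cur, preset, speed_kmh, dist_km=2.0, time_min=5.0):
    """True when camera warm-up/debugging should begin."""
    d = remaining_km(cur, preset)
    if d <= dist_km:                       # within the distance threshold
        return True
    if speed_kmh > 0 and (d / speed_kmh) * 60.0 <= time_min:
        return True                        # within the ETA threshold
    return False

# 6 km away at 80 km/h gives a 4.5 min ETA, inside the 5-minute window.
```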
In the step S1, the roof cameras include one or more, the operation of the roof cameras is remotely called through a control module in the vehicle-mounted central control, and operation icons corresponding to the control module are displayed on the touch display screen;
referring to fig. 7, after the icons of the control module are touched, the touch display screen displays a display interface corresponding to the corresponding icons, where the display interface includes: the method comprises the steps of presetting a function interface, a roof camera control interface and a height adjusting interface, wherein the preset function interface comprises a return main interface icon, a panoramic icon, a preset shooting mode icon and a non-panoramic icon, and when a user clicks the panoramic icon, a plurality of roof cameras can shoot panoramic images; when a user clicks the non-panoramic icon, the touch display screen displays the pictures shot by each camera in a display area in a split mode;
The preset shooting mode icon covers multiple shooting modes, specifically:
mode 1: a plurality of cameras shoot to the left; mode 2: a plurality of cameras shoot to the right; mode 3: a plurality of cameras shoot forward; mode 4: a plurality of cameras shoot backward; mode 5: a plurality of cameras shoot to the left and right; mode 6: a plurality of cameras shoot forward and backward; and a user-defined mode;
the user can modify, add or delete the shooting mode according to the requirement;
the control interface shows the position and rotation angle of each roof camera; touching the corresponding camera adjusts its rotation angle so that the camera observes a preset direction; for panoramic shooting, the plurality of cameras together form a panoramic camera;
the height adjusting interface is used for adjusting the ascending or descending height of the roof camera, and the roof camera can be adjusted to ascend or descend by touching the height adjusting interface;
For the roof cameras to display a panoramic image, the pictures of the several roof cameras must be stitched, and stitching requires calibration to determine the cameras' intrinsic and extrinsic parameters; however, calibration conventionally requires placing calibration plates around the vehicle, and the adjusted camera height is unknown during driving. The adjustable height range L of the roof camera is divided into m equal parts, so the minimum unit of height adjustment is Lk = L/m, i.e. each time the roof camera is raised or lowered its height can only change by an integer multiple of Lk;
To obtain the intrinsic and extrinsic parameters of the roof camera, the camera height is set to n·Lk (1 ≤ n ≤ m), calibration objects are placed around the vehicle body, the intrinsic and extrinsic parameters are computed by a calibration algorithm, and each height together with its intrinsic and extrinsic parameters is stored in an intrinsic/extrinsic parameter mapping table;
when the panoramic image is synthesized, the height of the roof camera is calculated, and then the internal and external parameter mapping table is called to find the internal and external parameters corresponding to the height of the roof camera.
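The mapping-table lookup can be sketched as below: the measured height is snapped to the nearest calibrated step n·Lk and used as the key. The total travel L, step count m and the matrix values are placeholders, not real calibration results.

```python
L, m = 0.40, 8                     # total height travel L divided into m steps
Lk = L / m                         # minimum height-adjustment unit

# height step n -> (K, R, T); intrinsics/extrinsics here are placeholders
param_table = {
    n: ({"f": 1000 + n}, f"R{n}", f"T{n}") for n in range(1, m + 1)
}

def lookup_params(height):
    """Snap a measured height to the nearest step n*Lk and fetch (K, R, T)."""
    n = max(1, min(m, round(height / Lk)))
    return param_table[n]
```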
The calibration algorithm includes Zhang Zhengyou's method, the DLT (direct linear transformation) method and others;
In the other method, the roof camera does not need to be calibrated at every height: it suffices to find the relation between the extrinsic matrix (R, T) at the current roof-camera height and the extrinsic matrix (Rs, Ts) calibrated at a preset height (where R is a rotation matrix and T a translation matrix), and solve for the extrinsics at the current height. The camera intrinsics remain unchanged, so only one calibration solution is needed; to improve precision and reduce error, the calibration can be repeated several times and averaged.
The method specifically comprises the following steps:
setting the external parameters of the roof camera as a matrix (R, T), the internal parameter matrix as K, and the current height of the roof camera as h;
step Sa, when panoramic shooting is carried out, the rotation angle position of each roof camera is fixed, the height of each roof camera can be freely adjusted, and a plurality of roof cameras shoot towards different angles and are spliced into panoramic images by utilizing shot images;
When the coordinate systems are established, the camera coordinate system changes only along the Z-axis direction, and the relations of the roof camera between the pixel coordinate system and the world coordinate system are:

s·[u, v, 1]^T = K·[R | T]·[X, Y, Z, 1]^T, with K = [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]]  (1)

[x, y, z]^T = R·[X, Y, Z]^T + T  (2)

In formulas (1) and (2), (u, v) are the two-dimensional pixel coordinates of an image point, f_u and f_v are the camera focal lengths in pixel units along the horizontal and vertical image axes, (u_0, v_0) is the coordinate of the image center point, (X, Y, Z) is a spatial point in the world coordinate system, (x, y, z) are the coordinates of that point in the camera coordinate system, and (R, T) are the extrinsic parameters, R being the rotation matrix and T the translation matrix;
step Sb, adjust the roof camera to the height h_s and place a calibration plate around the vehicle body; from the known pixel coordinates (u, v), in the corresponding roof camera, of several points (X, Y, Z) in the world coordinate system, calculate the extrinsic matrices Rs and Ts at the preset height using formula (1), and solve with formula (2) for the camera-frame points (x_s, y_s, z_s) corresponding to the known world points (X, Y, Z);
step Sc, when the roof camera is raised or lowered, according to the geometric relation of rotation and translation of coordinates, the camera-frame coordinates (x_h, y_h, z_h) of a spatial point at roof-camera height h, compared with those at height h_s, satisfy x_h = x_s and y_h = y_s unchanged, while z_h changes according to formula (3):

z_h = z_s + k·(h - h_s) + e  (3)
where k is a proportionality coefficient and e is a constant; k and e can be solved by calibration at different preset camera heights;
step Sd, since the camera-frame coordinates (x_s, y_s, z_s) of the spatial points at roof-camera height h_s are known, solve with formula (3) for the coordinates (x_h, y_h, z_h) of each spatial point when the roof-camera height is h; substituting the set of points (x_h, y_h, z_h) at height h into formula (2), with the corresponding world coordinates (X, Y, Z) known, calculate the camera extrinsics (R_h, T_h) from formula (2);
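The height-dependent depth model of step Sc treats only the camera-frame z as changing, linearly in the height, with a proportionality coefficient k and constant e solved from calibrations at preset heights. A minimal sketch, folding the linear law into z = k·h + e; the sample values are illustrative, not calibration data.

```python
def fit_k_e(samples):
    """Solve the linear depth model z = k*h + e from calibrations at two
    preset roof-camera heights. samples: [(height, z_in_camera_frame), ...]"""
    (h1, z1), (h2, z2) = samples[:2]
    k = (z2 - z1) / (h2 - h1)      # proportionality coefficient
    e = z1 - k * h1                # constant offset
    return k, e

def z_at_height(h, k, e):
    """Predicted camera-frame z of the spatial point at roof height h."""
    return k * h + e

# Raising the camera from 0.10 to 0.30 shortens z from 2.00 to 1.90 here.
k, e = fit_k_e([(0.10, 2.00), (0.30, 1.90)])
```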
A third acquisition mode: since constraint conditions are adopted, errors exist in calibration, and calibration can be performed multiple times and the results averaged to make the result more accurate. By adjusting the roof camera to a plurality of different preset heights h_i and repeating steps Sa to Sd of the second acquisition mode, a plurality of rotation matrices R_h^{h_i} and translation matrices T_h^{h_i} are acquired; the rotation matrix R_h and the translation matrix T_h at the current height h are then calculated as follows:

R_h = (1/n) · Σ_{i=1..n} R_h^{h_i}    (4)

T_h = (1/n) · Σ_{i=1..n} T_h^{h_i}    (5)

in the formulas, n represents the number of calibrations, h_i represents the height of the roof camera in the i-th calibration, and R_h^{h_i} and T_h^{h_i} respectively represent the rotation matrix and the translation matrix in the i-th external parameter matrix for height h, calculated using the external parameter matrix calibrated at height h_i.
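The averaging of formulas (4) and (5) amounts to an elementwise mean over the per-calibration estimates. A minimal sketch follows; the sample values are assumptions, and note that the elementwise mean of rotation matrices is not in general itself a valid rotation matrix (the patent's formula is taken as-is).

```python
import numpy as np

def average_extrinsics(R_list, T_list):
    """Formulas (4) and (5): elementwise mean of n per-calibration estimates."""
    R_h = np.mean(np.stack(R_list), axis=0)   # (1/n) * sum of rotation matrices
    T_h = np.mean(np.stack(T_list), axis=0)   # (1/n) * sum of translation matrices
    return R_h, T_h

# Illustrative: three calibrations at heights h_1, h_2, h_3 that happen to
# agree on rotation but differ slightly in translation.
R_estimates = [np.eye(3) for _ in range(3)]
T_estimates = [np.array([0.0, 0.0, z]) for z in (1.4, 1.5, 1.6)]
R_h, T_h = average_extrinsics(R_estimates, T_estimates)
print(T_h)
```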
The touch display screen is provided with an area for real-time display of shooting. When each camera's shooting area needs to be displayed in real time, the user clicks the non-panoramic display icon, and the touch area automatically displays the picture shot by each roof camera separately in split screen; when the panoramic image needs to be displayed in real time, the user clicks the panoramic display icon, and the touch area displays the panoramic image;
when the vehicle reaches a preset shooting interval, the vehicle-mounted central control starts the camera through the control module to shoot automatically. Because the route changes in real time during driving, shooting the whole journey would capture some places that are not the scenic spots the user wants to shoot; the user can therefore filter out roads that should not be shot;
referring to fig. 8, the user sets a shooting interval as required, the interval being the range between a first mark point and a second mark point. When the user drives the automobile near the first mark point, the roof camera begins calibration adjustment at the camera automatic calibration adjustment mark point to prepare for shooting, and shooting starts once the first mark point is reached. However, multiple paths may exist between the first mark point and the second mark point. If only a first mark point and a second mark point are set, shooting occurs whenever the automobile runs between the two points; if the user specifies both a shooting interval and a shooting path, the roof camera shoots according to the preset interval and path; if the user specifies a shooting interval and a path not to be shot, the roof camera shoots according to the shooting area while excluding that path. When the roof camera is not shooting, it remains in a working state and merely does not store picture or video data.
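The interval-and-path filtering above can be sketched as a small gating function on the positioning module's fix. This is a hypothetical sketch: the mark-point coordinates, the 50 m match radius, the road names, and the function names are all illustrative assumptions, not taken from the patent.

```python
import math

def within(pos, mark, radius_m):
    """Crude equirectangular distance check, adequate at mark-point scale."""
    lat1, lon1 = map(math.radians, pos)
    lat2, lon2 = map(math.radians, mark)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y) <= radius_m

def should_store(pos, road, first_mark, second_mark, excluded_roads, active):
    """Return (store_this_frame, new_active_flag).

    The camera keeps running throughout; only storage is gated,
    matching the description above."""
    if within(pos, first_mark, 50):
        active = True            # entered the shooting interval
    elif within(pos, second_mark, 50):
        active = False           # left the shooting interval
    return active and road not in excluded_roads, active

# Entering the interval on a road the user wants captured:
store, active = should_store((31.0001, 121.0), "scenic-road",
                             (31.0, 121.0), (31.05, 121.0), {"G50"}, False)
print(store, active)

# Same position, but on a road the user excluded from shooting:
store2, _ = should_store((31.0001, 121.0), "G50",
                         (31.0, 121.0), (31.05, 121.0), {"G50"}, True)
print(store2)
```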
Step S3 specifically includes: before the shot image or video is sent to the cloud server, it is stored in a local storage space, the local storage space comprising at least a first partition and a second partition; the first partition is used for storing data cyclically, and when the first partition is full, new data overwrites old data in first-in-first-out order;
the second partition is used for storing links to the images or videos on the cloud sharing server or in the first partition so as to save space, and the data of the second partition is not automatically deleted;
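The two-partition scheme can be sketched with a bounded FIFO buffer plus a persistent link table. A minimal sketch, with assumed capacities, names, and link format (none specified by the patent):

```python
from collections import deque

class LocalStore:
    """First partition: fixed-capacity FIFO; second partition: links, kept forever."""
    def __init__(self, capacity):
        # deque with maxlen silently drops the oldest entry when full,
        # i.e. new data overwrites old data in first-in-first-out order
        self.first = deque(maxlen=capacity)
        self.second = {}            # name -> link; never auto-deleted

    def save_media(self, name, blob):
        self.first.append((name, blob))

    def save_link(self, name, url):
        self.second[name] = url

store = LocalStore(capacity=2)
store.save_media("a.jpg", b"...")
store.save_media("b.jpg", b"...")
store.save_media("c.jpg", b"...")          # evicts "a.jpg" (FIFO)
store.save_link("a.jpg", "https://cloud.example/a.jpg")  # hypothetical URL
print([n for n, _ in store.first])
```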
to facilitate viewing by the user, pictures are displayed in groups according to the self-driving tour route. The grouping can take various forms, matched automatically or customized according to the user's selection, such as grouping by time, street name, scenic spot name, roof camera name or route name. The grouping display method adds labels to the picture attributes, classifies the pictures by label, and displays the total number of pictures in each group. Referring to fig. 9, shot pictures are displayed on a preset map in shooting-time order, together with the road name and the number of corresponding pictures; to view them, the user clicks an icon with a preset mark on the map, which opens the grouped pictures or picture links corresponding to that icon.
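The label-based grouping can be sketched as a simple aggregation over photo attributes. The field names and sample records below are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical photo records; each carries label attributes as described
photos = [
    {"file": "p1.jpg", "street": "Nanjing Rd", "camera": "roof-1"},
    {"file": "p2.jpg", "street": "Nanjing Rd", "camera": "roof-2"},
    {"file": "p3.jpg", "street": "Huaihai Rd", "camera": "roof-1"},
]

def group_by(photos, label):
    """Classify pictures by a label attribute and report each group's total."""
    groups = defaultdict(list)
    for p in photos:
        groups[p[label]].append(p["file"])
    # each group is displayed with its total picture count, as described
    return {k: (len(v), v) for k, v in groups.items()}

print(group_by(photos, "street"))
```

The same function could group by "camera" (roof camera name) or any other label the user selects.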
What has been described above is only a preferred embodiment of the present invention, and the present invention is not limited to the above examples. It will be clear to those skilled in the art that neither the form of this embodiment nor the manner of adjustment is limited thereto. Other modifications and variations that may be directly derived or suggested to one skilled in the art without departing from the basic concept of the invention are to be considered within the scope of the invention.

Claims (13)

1. A use method of a car roof camera system applied to self-driving tourism, comprising the following steps: step S1, a user presets, according to the self-driving tour route, a shooting interval and a shooting mode in which the roof camera is started for shooting;
step S2, when the vehicle runs to a preset shooting interval, the vehicle-mounted central control automatically starts a camera to shoot according to a preset shooting mode;
in step S1, the preset shooting section is implemented by monitoring data of the positioning module in real time through the vehicle-mounted central control unit, and when the position positioned by the positioning module matches the position of the preset shooting section, the vehicle-mounted central control unit calls the control module to start the roof camera to automatically shoot according to the preset shooting mode.
2. The use method of the car roof camera system applied to the self-driving tourism as claimed in claim 1, wherein the car roof camera is started in advance for debugging before a preset shooting interval is reached;
the pre-starting comprises starting 1-5 km or 1-10 min ahead of the starting point of the preset shooting interval.
3. The use method of the roof camera system applied to the self-driving tourism as claimed in claim 1, wherein the roof cameras comprise one or more roof cameras, the operation of the roof cameras is called remotely through a control module of a vehicle-mounted central control, and operation icons corresponding to the control module are displayed on a display interface of the touch display screen.
4. The use method of the car roof camera system applied to the self-driving tourism as claimed in claim 3, wherein the display interface of the touch display screen comprises: the method comprises the following steps that a function interface, a roof camera control interface and a height adjusting interface are preset, the preset function interface comprises a return main interface icon, a panoramic icon, a preset shooting mode icon and a non-panoramic icon, and when a user clicks the panoramic icon, a plurality of roof cameras shoot panoramic images according to a preset panoramic mode; when the user clicks the non-panoramic icon, the touch display screen displays a plurality of non-panoramic shooting modes for the user to select.
5. The method as claimed in claim 1, wherein the preset photographing mode comprises: mode 1: a plurality of cameras shoot leftwards; mode 2: a plurality of cameras shoot rightwards; mode 3: a plurality of cameras shoot forwards; mode 4: a plurality of cameras shoot backwards; mode 5: a plurality of cameras shoot leftwards and rightwards; mode 6: a plurality of cameras shoot forwards and backwards; mode 7: panoramic shooting; mode 8: a user-defined mode, in which the user sets the shooting angle and scene as required; the user can modify, add or delete shooting modes as required.
6. The use method of the car roof camera system applied to the self-driving tourism as claimed in claim 4, wherein the car roof camera control interface comprises the position and the rotation angle of the car roof camera, and the rotation angle of the camera can be adjusted by touching the corresponding car roof camera so that the camera can observe the preset direction;
the height adjusting interface is used for adjusting the ascending or descending height of the roof camera, and the roof camera can be adjusted to ascend or descend by touching the height adjusting interface.
7. The use method of the roof camera system applied to self-driving tourism as claimed in claim 1, wherein the preset shooting mode comprises a panoramic shooting mode, the panoramic image acquired in the panoramic shooting mode needs to be acquired after image splicing, before image splicing, calibration needs to be performed on the roof camera to acquire internal parameters and external parameters, and the acquisition of the external parameters comprises a first acquisition mode, a second acquisition mode or a third acquisition mode;
the first acquisition mode includes: calculating external parameters corresponding to the preset height of the roof camera within the adjustable height range by adjusting the height of the roof camera, and storing the external parameters in an external parameter mapping table; after panoramic photography is carried out, calling an external parameter mapping table to find an external parameter corresponding to the height of the current roof camera;
the second acquisition mode includes:
step Sa, during panoramic shooting, the rotation angle position of each roof camera is fixed; the roof camera is then adjusted to a preset height, so that when the coordinate system is established only the Z-axis direction of the camera coordinate system changes; the relation of the roof camera between the pixel coordinate system and the world coordinate system is:
s·[u, v, 1]^T = K · [R | T] · [X, Y, Z, 1]^T,  K = [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]]    (1)

[x, y, z]^T = R·[X, Y, Z]^T + T    (2)

in formula (1) and formula (2), (u, v) represents the two-dimensional pixel coordinates of the image, s is a scale factor, f_u and f_v respectively represent the focal length of the camera, in pixels, along the horizontal and vertical axes of the image, (u_0, v_0) represents the coordinates of the center point of the image, and (X, Y, Z) represents the coordinates of a spatial point in the world coordinate system; (x, y, z) represents the coordinates of the spatial point in the camera coordinate system; (R, T) are the external parameters, R being a rotation matrix and T a translation matrix;
step Sb, the roof camera is adjusted to a preset height h_s and a calibration plate is placed around the vehicle body; the known pixel coordinates (u, v) in the corresponding roof camera of a plurality of points (X, Y, Z) in the world coordinate system are obtained; the external parameter matrices R_s and T_s at the preset height are calculated using formula (1); and the points (x_s, y_s, z_s) in the camera coordinate system corresponding to the known points (X, Y, Z) are solved using formula (2);
step Sc, when the roof camera is adjusted to ascend or descend, according to the geometric relation of rotation and translation of coordinates, when the height of the roof camera is h, the coordinates (x_h, y_h, z_h) of the spatial point in the camera coordinate system, compared with those at roof camera height h_s, satisfy x_h = x_s and y_h = y_s (unchanged) while z_h changes, with the calculation formula:

z_h = z_s + k·(h − h_s) + e    (3)

where k is a proportionality coefficient and e is a constant; k and e can be solved by calibration at different preset camera heights;
step Sd, since the coordinates (x_s, y_s, z_s) of the spatial point when the roof camera height is h_s are known, the coordinates (x_h, y_h, z_h) of the spatial point when the roof camera height is h are solved by formula (3); the plurality of coordinate points (x_h, y_h, z_h) at height h are substituted into formula (2), and since the coordinates (X, Y, Z) of the spatial points in the world coordinate system are known, the external parameters (R_h, T_h) of the camera are calculated using formula (2);
a third acquisition mode: by adjusting the roof camera to a plurality of different preset heights h_i and repeating steps Sa to Sd of the second acquisition mode, a plurality of rotation matrices R_h^{h_i} and translation matrices T_h^{h_i} are acquired; the rotation matrix R_h and the translation matrix T_h at the current height h are then calculated as follows:

R_h = (1/n) · Σ_{i=1..n} R_h^{h_i}    (4)

T_h = (1/n) · Σ_{i=1..n} T_h^{h_i}    (5)

in the formulas, n represents the number of calibrations, h_i represents the height of the roof camera in the i-th calibration, and R_h^{h_i} and T_h^{h_i} respectively represent the rotation matrix and the translation matrix in the i-th external parameter matrix for height h, calculated using the external parameter matrix calibrated at height h_i.
8. The method of claim 1, further comprising: step S3, the user acquires the picture through the intelligent terminal and shares the picture with a preset authority member for interaction;
step S3 specifically includes: the user obtains the shot pictures or videos, selects the ones to be shared and sends them to the cloud sharing server, and the cloud sharing server sends the received videos or pictures to the intelligent terminals of the preset-authority members; the intelligent terminals of the preset-authority members receive and view the pictures or videos sent by the user, and may operate on them during viewing, the operations comprising commenting, liking, adding marks and doodling;
the cloud sharing server acquires the operation records of the preset-authority members in real time, finds the top-ranked pictures according to the number of views, comments or likes by the preset-authority members, acquires similar pictures through a built-in algorithm, and recommends them to the preset-authority members for viewing.
9. The use method of the roof camera system applied to the self-driving tour as claimed in claim 1, wherein the images or videos captured by the roof camera are stored in the local storage space or temporarily stored in the local storage space and then uploaded to the cloud sharing server, and the local storage space at least comprises the first partition and the second partition; the first partition is used for circularly storing data, and when the first partition is full of data, new data is overwritten to old data according to the first-in first-out sequence;
the second partition is used for storing the link of the image or the video in the cloud sharing server or the first partition, and data in the second partition is not automatically deleted;
the images or videos are displayed in a grouping mode, automatic matching or user self-definition is carried out according to user selection, and grouping display is carried out according to time, street names, scenic spot names or roof camera names or route names; the grouping display method is characterized in that a label is added in the picture attribute, then the picture attribute is classified through the label, and the total number of grouped pictures is displayed.
10. The use method of the car roof camera system applied to self-driving tourism as claimed in claim 8, wherein a self-driving tourism module is arranged in the intelligent terminal or in the vehicle-mounted central control, the self-driving tourism module comprising a permission setting module, a photo selection module and a viewing module;
the permission setting module is configured to set a member list capable of viewing the photos and permissions of the members for copying, commenting and editing the photos; the photo selection module is configured to provide a user with a selection of photos or videos to be shared; the viewing module is used for providing the members with the photos or videos shared by the viewing user for viewing.
11. The method of claim 1, wherein the roof camera system comprises: a video acquisition module and an MCU (Micro Control Unit) microcontroller containing a MAC (Media Access Control) module, together with a 100Base-T1 PHY (physical layer) module; the video acquisition module is connected with the microcontroller, and the PHY module is connected with the MAC module in the MCU through an RMII (Reduced Media-Independent Interface);
the video acquisition module is configured to be used for carrying out video image acquisition and compression processing operations, and compressed video image data is transmitted to the MCU module for processing; the video acquisition module comprises a DSP processing chip, and the video image comprises a bayer image.
12. The method for using the car roof camera system applied to the self-driving tourism as claimed in claim 11, wherein a transmission mechanism of the data packet generated by the car roof camera adopts a UDP/IP protocol cluster as a communication carrier, and a message confirmation mechanism is added when an important message is transmitted; the method comprises the following steps of adopting an SOME/IP protocol as a communication control protocol of the whole roof camera, encapsulating a start/stop control function included in the roof camera into an SOME/IP service form, and completing remote calling through vehicle-mounted central control;
the vehicle-mounted Ethernet-based AVB protocol cluster is adopted for transmitting the collected video or image of the vehicle-mounted camera, and comprises an IEEE 802.1AS precise time synchronization protocol, an IEEE 802.1Qav queue and forwarding protocol, an IEEE 802.1Qat stream reservation protocol and an IEEE 1722 audio and video bridging transmission protocol.
13. The use method of the roof camera system applied to the self-driving tourism as claimed in claim 1, characterized in that after the roof camera collects the image, the roof camera transmits the compressed image to the vehicle-mounted central control unit for image decompression processing;
the compression processing of the image comprises encoding the acquired image, the image encoding step comprising: Bayer image blocking, intra-block prediction, computation of the prediction residual of each image block, and entropy coding of the prediction residual and the prediction mode;
the Bayer image blocking comprises the steps of equally dividing an original image into image blocks with the same size;
the intra-block prediction includes: selecting among a plurality of preset modes for predicting the gray value of each partitioned small block according to the relationship between adjacent values within the block: the plurality of preset modes are applied in turn to obtain a predicted value for each small block, the difference between the predicted value and the true value is computed and rounded down, the sum of the absolute values of the residuals is then obtained, and the prediction mode with the minimum sum of absolute residuals is selected as the final coding mode;
prediction mode entropy coding: entropy coding the processed prediction residual error, and entropy coding the pixel residual error by using Huffman coding; and entropy coding the optimal prediction mode of each image block by exponential Golomb coding.
The decompression processing of the image includes: decoding of a compressed image, the decoding of the compressed image comprising: entropy decoding an input code stream to obtain image block prediction residuals and a prediction mode, performing inverse prediction on the prediction residuals of each image block by using the prediction mode, and reconstructing an obtained image pixel block to obtain a complete lossless original bayer image;
the entropy decoding includes: inputting the coding code stream of each image block, and performing Huffman decoding on the coding code stream to obtain a pixel prediction residual error of each block; and carrying out exponential Golomb decoding on the coded code stream consisting of the prediction modes of each block to obtain the optimal prediction modes of different blocks.
inverse prediction and image reconstruction: the optimal prediction mode of each block is used to perform inverse prediction on all blocks in turn, obtaining predicted pixel values for all positions of all image blocks; the predicted values are added to the decoded residual values, and 255 is subtracted from any sum larger than 255, finally yielding the gray values of the original image; each block is reconstructed in turn to obtain the original bayer image.
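The mode-selection step in the encoding described above (try each preset predictor per block, keep the one minimizing the sum of absolute residuals) can be sketched as follows. The two predictors shown (horizontal and vertical) are illustrative assumptions; the patent does not enumerate its preset modes.

```python
import numpy as np

def predict(block, mode):
    """Predict each pixel of a block from a neighbor, per the chosen mode."""
    pred = np.empty_like(block)
    if mode == 0:                       # horizontal: predict from left pixel
        pred[:, 0] = block[:, 0]
        pred[:, 1:] = block[:, :-1]
    else:                               # vertical: predict from pixel above
        pred[0, :] = block[0, :]
        pred[1:, :] = block[:-1, :]
    return pred

def best_mode(block):
    """Apply each preset mode and keep the one with the smallest
    sum of absolute residuals (the block's final coding mode)."""
    costs = []
    for mode in (0, 1):
        residual = block.astype(int) - predict(block, mode).astype(int)
        costs.append(np.abs(residual).sum())
    return int(np.argmin(costs))

# A block whose rows are identical: vertical prediction has zero residual
block = np.tile(np.arange(4, dtype=np.uint8), (4, 1))
print(best_mode(block))   # 1
```

In the full pipeline the winning mode's residuals would then be Huffman-coded and the mode index exponential-Golomb-coded, as the claim describes.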
CN202010071546.1A 2020-01-21 2020-01-21 Application method of car roof camera system applied to self-driving travel Active CN113212305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010071546.1A CN113212305B (en) 2020-01-21 2020-01-21 Application method of car roof camera system applied to self-driving travel


Publications (2)

Publication Number Publication Date
CN113212305A true CN113212305A (en) 2021-08-06
CN113212305B CN113212305B (en) 2024-05-31

Family

ID=77085290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010071546.1A Active CN113212305B (en) 2020-01-21 2020-01-21 Application method of car roof camera system applied to self-driving travel

Country Status (1)

Country Link
CN (1) CN113212305B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011087353A2 (en) * 2010-01-15 2011-07-21 Mimos Berhad Apparatus and method for single camera mirrored panoramic imaging
CN102837643A (en) * 2012-09-07 2012-12-26 广东好帮手电子科技股份有限公司 System for realizing panoramic reverse and method for obtaining panoramic reverse image
KR101525224B1 (en) * 2013-12-06 2015-06-04 주식회사 한국인터넷기술원 A portable terminal of having the auto photographing mode
CN105398403A (en) * 2015-11-30 2016-03-16 奇瑞汽车股份有限公司 Automobile
CN106274684A (en) * 2015-06-24 2017-01-04 张烂熳 The automobile panoramic round-looking system that a kind of low-angle photographic head realizes
CN107235008A (en) * 2017-06-16 2017-10-10 上海赫千电子科技有限公司 A kind of vehicle-mounted auxiliary drives panoramic picture system and the method for obtaining panoramic picture
CN109712194A (en) * 2018-12-10 2019-05-03 深圳开阳电子股份有限公司 Vehicle-mounted viewing system and its stereo calibration method and computer readable storage medium
CN110113540A (en) * 2019-06-13 2019-08-09 广州小鹏汽车科技有限公司 A kind of vehicle image pickup method, device, vehicle and readable medium


Also Published As

Publication number Publication date
CN113212305B (en) 2024-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant