CN115460352A - Vehicle-mounted video processing method, device, equipment, storage medium and program product - Google Patents
- Publication number
- CN115460352A (application number CN202211381789.0A)
- Authority
- CN
- China
- Prior art keywords
- video stream
- vehicle
- display
- display area
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
Abstract
The present disclosure relates to the field of image communication technologies, and in particular, to a vehicle-mounted video processing method, apparatus, device, storage medium, and program product. The method includes: acquiring a video stream captured by a vehicle-mounted camera; displaying the video stream in a display area of a vehicle-mounted display screen; in response to any display area being selected, displaying the selected display area in a selected state; and in response to a recording request, generating target image data from the video stream corresponding to the selected display area.
Description
Technical Field
The present disclosure relates to the field of image communication technologies, and in particular, to a method and an apparatus for processing a vehicle-mounted video, an electronic device, a storage medium, and a program product.
Background
When riding in a vehicle, a person who sees a scene worth photographing through the window usually captures it with a mobile phone or camera. However, while the vehicle is moving, conditions such as bumpy road surfaces often make it difficult to take satisfactory photos or videos with a mobile phone or camera, so beautiful scenery is easily missed or shots have to be repeated.
Disclosure of Invention
The present disclosure provides a technical solution for processing a vehicle-mounted video.
According to an aspect of the present disclosure, a method for processing a vehicle-mounted video is provided, including:
acquiring a video stream acquired by a vehicle-mounted camera;
displaying the video stream through a display area of a vehicle-mounted display screen;
in response to any display area being selected, displaying the selected display area in a selected state;
and in response to a recording request, generating target image data from the video stream corresponding to the selected display area.
In a possible implementation manner, the video stream displayed in the display area has a first preset resolution, and the video stream used for generating the target image data has a second preset resolution, where the first preset resolution is lower than the second preset resolution.
In a possible implementation manner, the acquiring a video stream collected by a vehicle-mounted camera includes:
encoding the original video data captured by the vehicle-mounted camera into a video stream with the first preset resolution and a video stream with the second preset resolution.
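As a hedged sketch of this dual-resolution encoding step, the following Python fragment models one raw camera feed being encoded into a low-resolution preview stream (for on-screen display) and a high-resolution stream (for generating target image data). All names here (`Frame`, `encode_dual_streams`, the specific resolutions) are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Frame:
    """Illustrative stand-in for one video frame's metadata."""
    width: int
    height: int
    camera_id: str

PREVIEW_RES = (640, 360)    # assumed "first preset resolution" (display)
RECORD_RES = (1920, 1080)   # assumed "second preset resolution" (recording)

def scale(frame: Frame, res: Tuple[int, int]) -> Frame:
    """Stand-in for a real scaler/encoder stage in the pipeline."""
    return Frame(width=res[0], height=res[1], camera_id=frame.camera_id)

def encode_dual_streams(raw: List[Frame]) -> Tuple[List[Frame], List[Frame]]:
    """Encode raw frames once at preview resolution and once at
    recording resolution, mirroring the acquiring step above."""
    preview = [scale(f, PREVIEW_RES) for f in raw]
    record = [scale(f, RECORD_RES) for f in raw]
    return preview, record
```

In a real pipeline the `scale` step would be a hardware or codec stage; the point is that display and recording consume two differently encoded copies of the same capture.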
In one possible implementation manner, the vehicle-mounted display screen comprises a recording manner selection control;
the generating of the target image data according to the video stream corresponding to the selected display area in response to the recording request includes:
in response to any recording manner selection control being triggered, generating target image data corresponding to the triggered recording manner from the video stream corresponding to the selected display area.
In a possible implementation manner, there are at least two recording manner selection controls, and the at least two recording manner selection controls can be in a triggered state at the same time.
In a possible implementation manner, the vehicle-mounted display screen includes a plurality of display areas, the plurality of display areas respectively display video streams collected by a plurality of vehicle-mounted cameras, and a relative positional relationship between the plurality of display areas is determined based on a relative positional relationship between shooting areas of the plurality of vehicle-mounted cameras.
In one possible implementation manner, in a case where an intersection region exists between the shooting regions corresponding to the first display region and the second display region in the plurality of display regions, the image information of any sub-region of the intersection region is displayed only through the first display region or only through the second display region.
In one possible implementation manner, for any sub-region of the intersection region, in response to the distortion degree of the sub-region in the first video stream being lower than that in the second video stream, the image information of the sub-region is displayed through the first display region; or, in response to the distortion degree of the sub-region in the first video stream being higher than or equal to that in the second video stream, the image information of the sub-region is displayed through the second display region. The first video stream is the video stream captured by the first vehicle-mounted camera corresponding to the first display area, and the second video stream is the video stream captured by the second vehicle-mounted camera corresponding to the second display area.
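The distortion-based selection rule above can be stated compactly. This minimal Python sketch (function name and inputs are hypothetical) applies the tie-breaking described: the first region shows the sub-region only when its stream's distortion is strictly lower; on a tie, the second region wins:

```python
def pick_display_region(distortion_first: float, distortion_second: float) -> str:
    """Decide which display region shows a sub-region of the
    intersection area. Strictly lower distortion in the first stream
    selects the first region; otherwise (including ties) the second."""
    return "first" if distortion_first < distortion_second else "second"
```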
In one possible implementation, the number of the display areas is at least two;
the method further comprises the following steps:
hiding the unselected display areas of the at least two display areas in response to the recording request.
In one possible implementation, the method further includes:
enlarging the selected display area in response to the recording request.
In a possible implementation manner, the generating target image data according to the video stream corresponding to the selected display area includes:
in response to the number of selected display areas being greater than or equal to 2, generating the target image data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas.
In one possible implementation manner, the generating target image data according to a relative position relationship between shooting areas corresponding to the at least two selected display areas includes:
stitching the video streams corresponding to the selected display areas according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas to generate the target image data.
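To illustrate the stitching step, a hedged Python sketch follows: frames from each selected area are ordered by the relative (row, column) position of the corresponding shooting area and joined frame by frame. The data representation (opaque strings for frames, a dict of positions) is an assumption for illustration only:

```python
from typing import Dict, List, Tuple

def stitch_by_position(streams: Dict[str, List[str]],
                       positions: Dict[str, Tuple[int, int]]) -> List[List[str]]:
    """Arrange per-camera frames according to the relative (row, col)
    position of each camera's shooting area, then join frame by frame.
    A real implementation would blend pixels; here each output entry is
    simply the ordered group of simultaneous frames."""
    # Sort cameras by their shooting area's relative position.
    order = sorted(positions, key=lambda cam: positions[cam])
    # Truncate to the shortest stream so every output row is complete.
    n = min(len(streams[cam]) for cam in order)
    return [[streams[cam][i] for cam in order] for i in range(n)]
```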
In one possible implementation, the in-vehicle display screen includes:
a display screen arranged at the front passenger seat and/or a display screen arranged in the rear seat area.
According to an aspect of the present disclosure, there is provided an apparatus for processing an in-vehicle video, including:
the acquisition module is used for acquiring the video stream acquired by the vehicle-mounted camera;
the first display module is used for displaying the video stream through a display area of the vehicle-mounted display screen;
the second display module is used for displaying the selected display area in a selected state in response to any display area being selected;
and the generating module is used for generating target image data from the video stream corresponding to the selected display area in response to a recording request.
In a possible implementation manner, the video stream displayed in the display area has a first preset resolution, and the video stream used for generating the target image data has a second preset resolution, where the first preset resolution is lower than the second preset resolution.
In one possible implementation manner, the obtaining module is configured to:
encoding the original video data captured by the vehicle-mounted camera into a video stream with the first preset resolution and a video stream with the second preset resolution.
In one possible implementation manner, the vehicle-mounted display screen comprises a recording manner selection control;
the generation module is configured to:
in response to any recording manner selection control being triggered, generating target image data corresponding to the triggered recording manner from the video stream corresponding to the selected display area.
In a possible implementation manner, there are at least two recording manner selection controls, and the at least two recording manner selection controls can be in a triggered state at the same time.
In a possible implementation manner, the vehicle-mounted display screen includes a plurality of display areas, the plurality of display areas respectively display video streams collected by a plurality of vehicle-mounted cameras, and a relative positional relationship between the plurality of display areas is determined based on a relative positional relationship between shooting areas of the plurality of vehicle-mounted cameras.
In one possible implementation manner, in a case where an intersection region exists between the shooting regions corresponding to the first display region and the second display region in the plurality of display regions, the image information of any sub-region of the intersection region is displayed only through the first display region or only through the second display region.
In one possible implementation manner, for any sub-region of the intersection region, in response to the distortion degree of the sub-region in the first video stream being lower than that in the second video stream, the image information of the sub-region is displayed through the first display region; or, in response to the distortion degree of the sub-region in the first video stream being higher than or equal to that in the second video stream, the image information of the sub-region is displayed through the second display region. The first video stream is the video stream captured by the first vehicle-mounted camera corresponding to the first display area, and the second video stream is the video stream captured by the second vehicle-mounted camera corresponding to the second display area.
In one possible implementation, the number of the display areas is at least two;
the device further comprises:
and the hiding module is used for responding to the recording request and hiding the unselected display areas in the at least two display areas.
In one possible implementation, the apparatus further includes:
and the amplifying module is used for responding to the recording request and amplifying the selected display area.
In one possible implementation, the generating module is configured to:
in response to the number of selected display areas being greater than or equal to 2, generating target image data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas.
In one possible implementation, the generating module is configured to:
stitching the video streams corresponding to the selected display areas according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas to generate the target image data.
In one possible implementation, the vehicle-mounted display screen includes:
a display screen arranged at the front passenger seat and/or a display screen arranged in the rear seat area.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; and a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product including computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, wherein when the code runs in an electronic device, a processor in the electronic device performs the above method.
In the embodiments of the present disclosure, a video stream captured by a vehicle-mounted camera is acquired and displayed in a display area of a vehicle-mounted display screen; the selected display area is displayed in a selected state in response to any display area being selected; and target image data is generated from the video stream corresponding to the selected display area in response to a recording request. In this way, a user inside the vehicle cabin can capture images and/or videos using the video stream from the vehicle-mounted camera without external photographic equipment such as a mobile phone or camera, which reduces the difficulty of shooting scenes outside and/or inside the cabin and improves the quality of the captured images and/or videos.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a processing method of a vehicle-mounted video provided by an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a physical link between a vehicle-mounted camera and a vehicle-mounted display screen in a method for processing a vehicle-mounted video provided by an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating a processing method of a vehicle-mounted video according to an embodiment of the present disclosure.
Fig. 4 shows a schematic view of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video according to the embodiment of the disclosure.
Fig. 5 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure.
Fig. 6 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure.
Fig. 7 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure.
Fig. 8 shows a block diagram of a processing apparatus for a vehicle-mounted video provided in an embodiment of the present disclosure.
Fig. 9 shows a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The embodiments of the present disclosure provide a vehicle-mounted video processing method and apparatus, an electronic device, a storage medium, and a program product. A video stream captured by a vehicle-mounted camera is acquired and displayed in a display area of a vehicle-mounted display screen; the selected display area is displayed in a selected state in response to any display area being selected; and target image data is generated from the video stream corresponding to the selected display area in response to a recording request. In this way, a user inside the vehicle cabin can capture images and/or videos using the video stream from the vehicle-mounted camera without external photographic equipment such as a mobile phone or camera, which reduces the difficulty of shooting scenes outside and/or inside the cabin and improves the quality of the captured images and/or videos.
The following describes in detail a processing method of a vehicle-mounted video provided by an embodiment of the present disclosure with reference to the drawings.
Fig. 1 shows a flowchart of the vehicle-mounted video processing method provided by an embodiment of the present disclosure. In a possible implementation manner, the method may be executed by a vehicle-mounted video processing apparatus, for example by a vehicle-mounted device or another electronic device. The vehicle-mounted device may be a head unit, a domain controller, or a processor in the vehicle cabin, and may also be a device host that performs data processing operations such as image processing in a DMS (Driver Monitoring System) or an OMS (Occupant Monitoring System). In some possible implementations, the method may be implemented by a processor calling computer readable instructions stored in a memory. As shown in fig. 1, the processing method of the in-vehicle video includes steps S11 to S14.
In step S11, a video stream captured by the in-vehicle camera is acquired.
In step S12, the video stream is displayed through a display area of the in-vehicle display screen.
In step S13, in response to selection of any display region, the selected display region is displayed in a selected state.
In step S14, in response to the recording request, target image data is generated according to the video stream corresponding to the selected display area.
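Steps S11 to S14 can be summarized in a minimal, hypothetical Python sketch of the selection-and-record flow (the class and method names are assumptions, not from the disclosure):

```python
class VideoRecorderUI:
    """Sketch of steps S11-S14: track which display areas are selected
    and, on a recording request, produce target data from the selected
    areas' streams. Streams are opaque values for illustration."""

    def __init__(self, streams):
        # S11/S12: acquired streams keyed by display area name.
        self.streams = streams
        self.selected = set()

    def select(self, area):
        # S13: mark a display area as selected (unknown areas ignored).
        if area in self.streams:
            self.selected.add(area)

    def record(self):
        # S14: generate target data from the selected areas' streams.
        return {a: self.streams[a] for a in sorted(self.selected)}
```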
In the embodiment of the present disclosure, the number of the vehicle-mounted cameras may be one or more than two. In one possible implementation, the number of the vehicle-mounted cameras may be at least two, and the at least two vehicle-mounted cameras may belong to at least one camera type. In one possible implementation, the at least two onboard cameras may belong to at least two camera types.
In one possible implementation, the camera types may be divided according to mounting location and/or function. For example, the camera types may include a front view camera, a surround view camera, a rear view camera, a side view camera, a built-in camera, and the like.
The number of front-view cameras of the vehicle may be at least one. A front-view camera may be mounted inside or outside the vehicle cabin, for example on the front windshield. The video stream captured by the front-view camera can be used for at least one kind of video analysis, such as Lane Departure Warning (LDW), Pedestrian Collision Warning (PCW), Forward Collision Warning (FCW), preceding-vehicle departure reminding, Traffic Sign Recognition (TSR), or Traffic Light Recognition (TLR), which is not limited here.
The number of surround-view cameras of the vehicle may be at least two; for example, the number of surround-view cameras may be 4 to 8. Surround-view cameras can be installed outside the cabin, for example at the vehicle logo, the left rear-view mirror, or the right rear-view mirror. A surround-view camera may be a wide-angle camera; for example, surround-view cameras may be further classified into forward, left, right, and backward fisheye cameras. Each surround-view camera can be calibrated before leaving the factory to enable accurate panoramic stitching. The video streams captured by the surround-view cameras are the basis for panoramic parking. For example, they can be used to display a panoramic surround view and can be fused with the visual perception and target detection of the parking function, so as to reduce the driver's blind area and improve parking safety and driving safety.
The number of the rear view cameras of the vehicle may be at least one, and the rear view cameras may be installed outside the cabin. For example, a rear view camera may be mounted in the rear tail box. Based on the video stream collected by the rear-view camera, the image for backing a car can be displayed to assist in parking.
The number of side-view cameras of the vehicle may be at least one, and side-view cameras may be installed outside the cabin. Side-view cameras can be further divided into side-front-view and side-rear-view cameras. A side-front-view camera can be arranged below the B-pillar or at the rear-view mirror, and a side-rear-view camera can be arranged at the front fender of the vehicle. The video stream captured by the side-front-view camera can be used to detect vehicles and pedestrians to the side, while the video stream captured by the side-rear-view camera can be used in scenarios such as changing lanes and merging into other roads.
The number of the built-in cameras can be at least one, and the built-in cameras can be installed in a vehicle cabin. For example, the built-in camera may be mounted in an interior rearview mirror, an a-pillar, a dashboard, center console, etc. The video stream collected by the built-in camera can be used for monitoring the state of a driver or other passengers in the cabin, and for example, the functions of fatigue reminding and the like can be realized by monitoring the state of the driver.
In the embodiment of the disclosure, all or part of the video stream collected by the vehicle-mounted camera can be acquired. For example, at least two video streams captured by at least two in-vehicle cameras may be acquired in real time.
In the embodiments of the present disclosure, when at least two video streams captured by at least two vehicle-mounted cameras are obtained, the at least two video streams can be displayed through at least two display areas respectively. The video streams captured by different vehicle-mounted cameras can be displayed in different display areas, or the video streams captured by related vehicle-mounted cameras can be displayed in the same display area. In a possible implementation manner, when the at least two cameras include a surround-view camera group, the video streams captured by the multiple surround-view cameras in the group can be stitched into a panoramic video stream, and the panoramic video stream can be displayed through a single display area. That is, the video streams captured by the multiple surround-view cameras in the group can be displayed in the same display area. The user can switch the viewing angle by interacting with the vehicle-mounted display screen; for example, when the vehicle-mounted display screen is a touch screen, the user can switch the viewing angle by touch.
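The touch-based view switching mentioned above could, under assumed parameters, map a horizontal drag to a rotation of the virtual viewing angle on the stitched panoramic stream. This sketch is purely illustrative; the sensitivity constant and function name are assumptions:

```python
def update_view_yaw(current_yaw: float, drag_pixels: float,
                    degrees_per_pixel: float = 0.25) -> float:
    """Rotate the virtual viewing angle of a stitched panoramic stream
    in response to a horizontal touch drag, wrapping to [0, 360)."""
    return (current_yaw + drag_pixels * degrees_per_pixel) % 360.0
```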
In a possible implementation manner, at least two video streams collected by at least two vehicle-mounted cameras can be acquired, the at least two video streams are respectively displayed through at least two display areas of a vehicle-mounted display screen, the selected display area is displayed in a selected state in response to the fact that any display area in the at least two display areas is selected, and target image data are generated according to the video stream corresponding to the selected display area in response to a recording request. Hereinafter, an example of "acquiring at least two video streams collected by at least two vehicle-mounted cameras and displaying the at least two video streams respectively through at least two display areas of a vehicle-mounted display screen" will be described.
In one possible implementation, the vehicle-mounted display screen includes: a display screen arranged at the front passenger seat and/or a display screen arranged in the rear seat area.
In this implementation, a video stream captured by the vehicle-mounted camera is acquired and displayed in a display area of a display screen arranged at the front passenger seat; the selected display area is displayed in a selected state in response to any display area being selected, and target image data is generated from the video stream corresponding to the selected display area in response to a recording request. This makes it convenient for the front passenger to capture images and/or videos using the video stream from the vehicle-mounted camera without taking out a mobile phone or camera, which reduces the difficulty of shooting scenes outside and/or inside the cabin and improves the quality of the captured images and/or videos. The same applies to a display screen arranged in the rear seat area: rear passengers can likewise complete image and/or video capture using the video stream from the vehicle-mounted camera without using their own mobile phones or cameras.
In one possible implementation, the in-vehicle display screen includes: a display screen arranged at the driver's seat. In this implementation, when the vehicle is in a stationary state, the driver may complete the capture of multi-view images and/or videos using the video stream collected by the vehicle-mounted camera.
Fig. 2 is a schematic diagram illustrating a physical link between the vehicle-mounted cameras and the vehicle-mounted display screens in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure. In the example shown in fig. 2, the multi-camera module includes a plurality of vehicle-mounted cameras, and may include a Complementary Metal-Oxide-Semiconductor (CMOS) sensor and an Image Signal Processor (ISP). The video stream collected by each vehicle-mounted camera can be serialized by a serializer to obtain serial data, which is then transmitted to a deserializer at the domain controller. The serial data transmission may be performed over GMSL (Gigabit Multimedia Serial Link) or FAKRA coaxial harnesses, for example. The deserializer at the domain controller may transmit the deserialized data to the System on Chip (SoC) of the domain controller through a MIPI (Mobile Industry Processor Interface) interface. The system-on-chip can be powered by the vehicle power supply, and processes the video streams collected by the plurality of cameras. For example, in the case where the multi-camera module includes a surround-view camera group, the stitching of the panoramic video stream may be performed by the system-on-chip. The system-on-chip may also support a high-speed Controller Area Network (CAN) and/or UDS (Unified Diagnostic Services), and the like, which is not limited herein. After the system-on-chip processes the video streams collected by the plurality of cameras, the processed video data can be serialized by a serializer and transmitted to deserializers at the vehicle-mounted display screens. In fig. 2, the vehicle-mounted display screens comprise a main driving central control screen, a front passenger entertainment screen and a rear-row entertainment screen. As shown in fig. 2, the plurality of vehicle-mounted display screens can be connected in a daisy-chain manner to reduce the number of serializers required, while still allowing the plurality of vehicle-mounted display screens to display different contents.
Fig. 3 is a schematic diagram illustrating a processing method of a vehicle-mounted video according to an embodiment of the present disclosure. In the example shown in fig. 3, the system-on-chip in the domain controller can obtain the video streams collected by the left camera, the right camera, the front camera, the rear camera, the surround-view camera and the in-vehicle camera.
In a possible implementation manner, the video stream displayed in the display area has a first preset resolution, and the video stream used for generating the target image data has a second preset resolution, where the first preset resolution is lower than the second preset resolution.
In this implementation, the displaying the video stream through the display area of the vehicle-mounted display screen may include: displaying a video stream with a first preset resolution through a display area of a vehicle-mounted display screen; the generating target image data according to the video stream corresponding to the selected display area in response to the recording request may include: and responding to the recording request, and generating target image data according to the video stream with the second preset resolution corresponding to the selected display area.
In this implementation, displaying the video stream with the first preset resolution through the display area of the vehicle-mounted display screen improves the real-time performance of video stream display, while generating target image data according to the video stream with the second preset resolution corresponding to the selected display area in response to the recording request improves the quality of the generated target image data. Therefore, according to this embodiment, both the real-time performance of video stream display and the quality of the generated target image data can be achieved.
As an example of this implementation manner, the acquiring a video stream collected by a vehicle-mounted camera includes: and encoding the original video data acquired by the vehicle-mounted camera into the video stream with the first preset resolution and the video stream with the second preset resolution. For example, for any one of at least two vehicle-mounted cameras, the raw video data collected by the vehicle-mounted camera can be encoded into a video stream of a first preset resolution and a video stream of a second preset resolution.
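As an illustrative sketch (not the patented encoder), the dual-stream idea above can be expressed as follows; the 2×2 subsampling and the nested-list frame representation are assumptions, and a real pipeline would use the ISP plus a hardware H.264/H.265 encoder:

```python
def encode_dual_streams(raw_frame):
    """Produce a preview frame at the first preset resolution (540p) and
    keep the full frame at the second preset resolution (1080p).
    Downscaling here is naive 2x2 subsampling of rows and columns; this
    sketch is an assumption, not the disclosure's actual encoder."""
    preview = [row[::2] for row in raw_frame[::2]]  # first preset resolution
    full = raw_frame                                # second preset resolution
    return preview, full

raw = [[0] * 1920 for _ in range(1080)]             # stand-in 1080p raw frame
low, high = encode_dual_streams(raw)
assert (len(low), len(low[0])) == (540, 960)
assert (len(high), len(high[0])) == (1080, 1920)
```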
In the example shown in fig. 3, the system on chip may provide the video stream of the first preset resolution and the video stream of the second preset resolution to an in-vehicle Application (APP). The video stream with the first preset resolution can be used for displaying on-vehicle display screens (such as a passenger screen or a rear-row screen). The video stream of the second preset resolution may be used to generate at least one of a target image, a target image sequence, a target video, and the like.
In the example shown in fig. 3, the system on chip may also implement functions such as blind spot detection, pedestrian warning, parking assistance, panoramic around view, driver monitoring assistance, and the like.
By adopting this example, both the real-time performance of video stream display and the quality of the generated target image data can be satisfied.
As one example of this implementation, the aspect ratios of the first preset resolution and the second preset resolution are the same. For example, the aspect ratios of the first preset resolution and the second preset resolution are both 4:3; as another example, the aspect ratios of the first preset resolution and the second preset resolution are both 16:9; and so on. For example, the first preset resolution may be 540P (i.e., 960 × 540) and the second preset resolution may be 1080P (i.e., 1920 × 1080). Of course, a person skilled in the art may flexibly set the first preset resolution and the second preset resolution according to the requirements of the actual application scenario, which is not limited herein.
In one example, the system on chip may obtain raw video data collected by the left-hand camera, and may encode the raw video data collected by the left-hand camera into a 540P video stream and a 1080P video stream; the system-level chip can acquire original video data acquired by the vehicle right camera and encode the original video data acquired by the vehicle right camera into a 540P video stream and a 1080P video stream; the system-level chip can acquire original video data acquired by the front camera and encode the original video data acquired by the front camera into a 540P video stream and a 1080P video stream; the system level chip can acquire original video data acquired by the vehicle rear camera and can encode the original video data acquired by the vehicle rear camera into a 540P video stream and a 1080P video stream; the system-level chip can acquire original video data acquired by the all-around camera and encode the original video data acquired by the all-around camera into a 540P video stream and a 1080P video stream; the system-on-chip can acquire original video data acquired by the in-vehicle camera and can encode the original video data acquired by the in-vehicle camera into a 540P video stream and a 1080P video stream.
By encoding the original video data collected by the vehicle-mounted camera into a video stream with a first preset resolution and a video stream with a second preset resolution, which have the same aspect ratio, the user can conveniently apply the modification operation (such as a cropping operation) on the video stream with the first preset resolution through the display area to the video stream with the second preset resolution, so that the calculation amount of vehicle-mounted video processing can be reduced.
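A minimal sketch of how such a modification operation could be carried over, assuming the two resolutions from the example above (960 × 540 and 1920 × 1080); the function name and the (x, y, w, h) crop representation are hypothetical:

```python
def map_crop_to_full_res(crop, preview_wh=(960, 540), full_wh=(1920, 1080)):
    """Map a crop rectangle chosen on the preview-resolution stream onto
    the full-resolution stream. Because both streams share the same
    aspect ratio, a single uniform scale factor suffices, which keeps
    the processing cheap (as the text notes)."""
    sx = full_wh[0] / preview_wh[0]
    sy = full_wh[1] / preview_wh[1]
    assert abs(sx - sy) < 1e-9, "aspect ratios must match"
    x, y, w, h = crop
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A crop drawn on the 540p preview maps directly onto the 1080p stream.
assert map_crop_to_full_res((100, 50, 320, 180)) == (200, 100, 640, 360)
```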
In another possible implementation manner, the acquiring at least two video streams collected by at least two vehicle-mounted cameras includes: for any one of the at least two vehicle-mounted cameras, acquiring only one video stream at a preset resolution. For example, a 1080P video stream may be acquired separately for each vehicle-mounted camera.
In a possible implementation manner, before the at least two video streams are displayed through the vehicle-mounted display screen, picture correction can be further performed on at least part of the at least two video streams. For example, the fish-eye effect can be removed by using a fish-eye picture correction algorithm to obtain a head-up picture.
In one possible implementation, at least one of a color temperature, a white balance, an exposure level, and the like of the at least two video streams may also be unified before the at least two video streams are displayed through the in-vehicle display screen.
In the embodiment of the present disclosure, the preview content of the vehicle-mounted display screen may be transmitted based on an AVB (Audio Video Bridging) protocol of the vehicle-mounted ethernet. Or, in case of coaxial line connection, MIPI signals may be directly transmitted.
In a possible implementation manner, the vehicle-mounted display screen includes a plurality of display areas, the plurality of display areas respectively display video streams collected by a plurality of vehicle-mounted cameras, and a relative positional relationship between the plurality of display areas is determined based on a relative positional relationship between shooting areas of the plurality of vehicle-mounted cameras.
For example, the at least two video streams include a first video stream and a second video stream, and the at least two display areas include a first display area and a second display area, where the first video stream and the second video stream are any two video streams of the at least two video streams, the first display area is a display area corresponding to the first video stream, and the second display area is a display area corresponding to the second video stream; and determining the relative position relationship between the first display area and the second display area according to the relative position relationship between the first shooting area corresponding to the first video stream and the second shooting area corresponding to the second video stream. In this implementation, the first display area may be used to display the first video stream, and the second display area may be used to display the second video stream. The first photographing region may represent a photographing region corresponding to the first video stream, and the second photographing region may represent a photographing region corresponding to the second video stream.
Fig. 4 is a schematic view illustrating a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video according to the embodiment of the disclosure. In the example shown in fig. 4, the in-vehicle camera picture may represent a picture of a video stream captured by an in-vehicle camera, the front camera picture may represent a picture of a video stream captured by a front camera, the roof surround view camera picture may represent a spliced picture of a plurality of video streams captured by a plurality of surround view cameras, the left camera picture may represent a picture of a video stream captured by a left camera of the vehicle, the rear camera picture may represent a picture of a video stream captured by a rear camera of the vehicle, and the right camera picture may represent a picture of a video stream captured by a right camera of the vehicle.
The shooting area corresponding to the video stream collected by the front camera is arranged in front of the shooting area corresponding to the video stream collected by the rear camera, so that the display area corresponding to the front camera can be arranged above the display area corresponding to the rear camera. The shooting area corresponding to the video stream collected by the left camera of the vehicle is on the left side of the shooting area corresponding to the video stream collected by the camera behind the vehicle, so that the display area corresponding to the left camera of the vehicle can be arranged on the left side of the display area corresponding to the camera behind the vehicle. The shooting region corresponding to the video stream collected by the vehicle right camera is on the right side of the shooting region corresponding to the video stream collected by the vehicle rear camera, and therefore the display region corresponding to the vehicle right camera can be arranged on the right side of the display region corresponding to the vehicle rear camera.
In this implementation manner, the relative positional relationship between the plurality of display areas is determined based on the relative positional relationship between the shooting areas of the plurality of vehicle-mounted cameras, so that a more natural preview effect can be achieved, and a better viewing experience can be obtained for a user.
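The layout rule above can be sketched as a small lookup, with the camera names and a 3×3 grid chosen here for illustration (both are assumptions, not part of the claims):

```python
# Each camera's display area, expressed as a (col, row) cell on a 3x3
# grid centred on the vehicle; the arrangement follows the relative
# positions described above (front above rear, left/right beside rear).
CAMERA_GRID = {
    "front":    (1, 0),
    "left":     (0, 1),
    "surround": (1, 1),
    "right":    (2, 1),
    "rear":     (1, 2),
}

def display_cell(camera):
    """Return the grid cell of the display area for a given camera."""
    return CAMERA_GRID[camera]

# The front camera's display area sits above the rear camera's.
assert display_cell("front")[1] < display_cell("rear")[1]
# The left camera's display area sits to the left of the rear camera's.
assert display_cell("left")[0] < display_cell("rear")[0]
# The right camera's display area sits to the right of the rear camera's.
assert display_cell("right")[0] > display_cell("rear")[0]
```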
As an example of this implementation, in response to an intersection area existing between a first shooting area corresponding to the first video stream and a second shooting area corresponding to the second video stream, a relative positional relationship between a first display area corresponding to the first video stream and a second display area corresponding to the second video stream may be determined according to a relative positional relationship between the first shooting area and the second shooting area.
In this example, in a case where there is an intersection area between the first photographing area and the second photographing area, the same picture information exists in the first video stream and the second video stream.
In this example, by responding to the existence of the intersection region between the first shooting region corresponding to the first video stream and the second shooting region corresponding to the second video stream, the relative positional relationship between the first display region corresponding to the first video stream and the second display region corresponding to the second video stream is determined according to the relative positional relationship between the first shooting region and the second shooting region, thereby being capable of facilitating the user to preview the splicing effect of the multi-view images and/or videos.
As another example of this implementation, when there is no intersection region between the first shooting region corresponding to the first video stream and the second shooting region corresponding to the second video stream, the relative positional relationship between the first display region corresponding to the first video stream and the second display region corresponding to the second video stream may be determined according to the relative positional relationship between the first shooting region and the second shooting region.
As an example of this implementation, in a case where there is an intersection region between imaging regions corresponding to a first display region and a second display region among the plurality of display regions, image information of any sub-region of the intersection region is displayed only by the first display region or only by the second display region. In this example, by displaying the image information of any sub-region of the intersection region only in the first display region or only in the second display region, repeated display can be reduced, and the user can preview the stitching effect more conveniently.
In one example, for any sub-region of the intersection region, in response to a distortion degree of the sub-region in a first video stream being lower than a distortion degree of the sub-region in a second video stream, displaying image information of the sub-region through the first display region; or, for any sub-region of the intersection region, in response to that the distortion degree of the sub-region in the first video stream is higher than or equal to the distortion degree of the sub-region in the second video stream, displaying the image information of the sub-region through the second display region; the first video stream is a video stream acquired by a first vehicle-mounted camera corresponding to the first display area, and the second video stream is a video stream acquired by a second vehicle-mounted camera corresponding to the second display area. In this example, the preview effect can be improved by displaying the image information with a small degree of distortion corresponding to any one of the sub-regions in the intersection region.
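A sketch of this distortion-based selection rule, assuming hypothetical per-sub-region distortion scores (the scoring functions are stand-ins, not defined by the disclosure):

```python
def pick_display_region(sub_region, distortion_first, distortion_second):
    """Decide which display region shows a sub-region of the
    intersection area: the first region wins only if its stream is
    strictly less distorted there; ties go to the second region,
    matching the rule described above."""
    d1 = distortion_first(sub_region)
    d2 = distortion_second(sub_region)
    return "first" if d1 < d2 else "second"

# The edge of a wide-angle frame is typically more distorted than the
# centre of the adjacent camera's frame, so the second region shows it.
assert pick_display_region("overlap-strip", lambda r: 0.8, lambda r: 0.2) == "second"
assert pick_display_region("overlap-strip", lambda r: 0.1, lambda r: 0.5) == "first"
```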
In another example, the image information of the intersection region may be randomly displayed in the first display region or the second display region.
In another example, the image information of the intersection area may be displayed through the first display area and the second display area, respectively. In this example, the first display area may display the complete first video stream and the second display area may display the complete second video stream.
In another possible implementation manner, when the positions of the respective display regions are set, the relative positional relationship between the shooting regions of the video streams corresponding to the respective display regions may not be considered.
In the embodiment of the present disclosure, the sizes of different display areas in the vehicle-mounted display screen may be the same or different.
In one possible implementation, the aspect ratio of the display area may be a fixed ratio. For example, the pixel size of the display area may be 960x540. In this implementation manner, when the aspect ratio of the video frame of the video stream captured by any one of the in-vehicle cameras is different from the aspect ratio of the display area, the display area may be filled in by using a black border or the like.
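A minimal sketch of the black-border fill, using plain nested lists as stand-in frames; the function name and the no-scaling assumption are illustrative only:

```python
def letterbox(frame, area_wh=(960, 540), fill=0):
    """Centre a frame inside a fixed-size display area and pad the
    remainder with black, as described above for mismatched aspect
    ratios. The frame is assumed to already fit inside the area."""
    aw, ah = area_wh
    fh, fw = len(frame), len(frame[0])
    y0, x0 = (ah - fh) // 2, (aw - fw) // 2
    canvas = [[fill] * aw for _ in range(ah)]   # black background
    for y, row in enumerate(frame):
        canvas[y0 + y][x0:x0 + fw] = row
    return canvas

square = [[255] * 540 for _ in range(540)]      # 1:1 frame into a 16:9 area
boxed = letterbox(square)
assert len(boxed) == 540 and len(boxed[0]) == 960
assert boxed[270][0] == 0 and boxed[270][480] == 255   # border vs. content
```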
In another possible implementation manner, for any vehicle-mounted camera, the aspect ratio of the display area corresponding to the vehicle-mounted camera may be determined according to the aspect ratio of the video frame of the video stream acquired by the vehicle-mounted camera, so that the aspect ratio of the display area corresponding to the vehicle-mounted camera is equal to the aspect ratio of the video frame of the video stream acquired by the vehicle-mounted camera.
In the disclosed embodiments, two or more of the at least two display areas may be in a selected state at the same time. That is, the target image data may be generated based on the video streams corresponding to two or more selected display areas. The data type of the target image data may be an image, an image sequence, a video, or the like, and is not limited herein.
In the embodiment of the present disclosure, the user may select the display area by touch or by voice. As shown in fig. 4, in one example, the user may select a display area by tapping the circle in its upper right corner.
In the embodiment of the present disclosure, the recording request may be generated by triggering a recording mode selection control, or by a voice instruction, and the like. In the voice mode, the vehicle-mounted camera can be linked with the voice assistant of the in-vehicle head unit, and photographing or video recording can be controlled through a corresponding voice instruction. For example, the user may issue the voice command "small X, small X, record video to the front and right" to control the vehicle-mounted video processing device to generate the target video based on the video stream collected by the front camera and the video stream collected by the right camera. For another example, the user may issue the voice command "small X, small X, record video ahead" to control the vehicle-mounted video processing device to generate the target video based on the video stream collected by the front camera.
In one possible implementation manner, the vehicle-mounted display screen may display a single recording mode selection control, and the recording mode selection control may be used to trigger at least one recording mode. In this implementation manner, in response to detecting the trigger operation for the recording manner selection control, the recording manner may be determined according to the type of the trigger operation. For example, in fig. 4, the in-vehicle display screen may display a shooting/recording control, may generate a recording request for requesting shooting in response to detecting a one-click operation on the shooting/recording control, may generate a recording request for requesting recording in response to detecting a long-press operation on the shooting/recording control, and so on.
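The tap-versus-long-press dispatch can be sketched as follows; the 0.5 s threshold is an assumption, not specified by the disclosure:

```python
def recording_request(press_duration_s, long_press_threshold_s=0.5):
    """Map a trigger gesture on the single shoot/record control to a
    recording request, per the example above: a short tap requests a
    photo, a long press requests video recording."""
    if press_duration_s >= long_press_threshold_s:
        return "record_video"
    return "take_photo"

assert recording_request(0.1) == "take_photo"
assert recording_request(1.2) == "record_video"
```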
In one possible implementation, the number of the display areas is at least two; the method further comprises the following steps: and hiding the unselected display area in the at least two display areas in response to the recording request. In this implementation manner, the unselected display areas in the at least two display areas are hidden in response to the recording request, so that interference of the unselected display areas on a user can be reduced, and the user experience of taking and/or recording a picture by using a video stream acquired by a vehicle-mounted camera can be improved.
In another possible implementation manner, after the recording request is detected, the non-selected display area of the at least two display areas may not be hidden.
In one possible implementation, the method further includes: and responding to the recording request, and enlarging the selected display area. In this implementation, the selected display area is enlarged in response to the recording request, so that the user can obtain a better preview effect through the limited screen space of the vehicle-mounted display screen.
Fig. 5 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure. In the example shown in fig. 5, the selected display areas include a display area corresponding to the front camera (i.e., a display area corresponding to the front camera screen) and a display area corresponding to the left camera (i.e., a display area corresponding to the left camera screen). As shown in fig. 5, in response to the recording request, the display area corresponding to the front camera and the display area corresponding to the left camera may be enlarged, and the display areas corresponding to the other cameras may be hidden. In addition, the recording of the video can be stopped in response to the ending control being triggered.
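A sketch combining the two behaviours above (hiding unselected display areas and enlarging the selected ones); the even horizontal split is an assumed layout policy, since the disclosure does not fix one:

```python
def apply_recording_layout(areas, selected, screen_wh=(1920, 1080)):
    """On a recording request, hide the unselected display areas and
    split the screen evenly among the selected ones. Returns a mapping
    from visible area name to an (x, y, width, height) rectangle."""
    visible = [a for a in areas if a in selected]
    w = screen_wh[0] // max(len(visible), 1)
    return {a: (i * w, 0, w, screen_wh[1]) for i, a in enumerate(visible)}

layout = apply_recording_layout(
    ["front", "left", "rear", "right"], selected={"front", "left"})
assert set(layout) == {"front", "left"}                  # others hidden
assert all(rect[2] == 960 for rect in layout.values())   # enlarged areas
```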
In one possible implementation manner, the vehicle-mounted display screen comprises a recording manner selection control; the generating of the target image data according to the video stream corresponding to the selected display area in response to the recording request includes: and responding to the trigger of any recording mode selection control, and generating target image data corresponding to the recording mode according to the video stream corresponding to the selected display area.
In this implementation, the recording mode selection control may represent a control for selecting a recording mode. The vehicle-mounted display screen can comprise at least one recording mode selection control. As an example of this implementation, the in-vehicle display screen may include at least two recording mode selection controls. For example, the recording mode may include at least one of taking a single photo, continuously taking multiple photos, recording a video, delaying photographing, and the like, and accordingly, the recording mode selection control may include at least one of a taking a single photo control, continuously taking multiple photos control, recording a video control, delaying photographing control, and the like.
In this implementation manner, in response to the triggering of any recording manner selection control, a recording request may be generated according to the recording manner corresponding to the recording manner selection control.
In the implementation mode, the target image data of different image types can be generated according to different recording modes selected by a user, so that the flexibility of generating the target image data can be improved, and the preference of the user for the image types can be met.
As an example of this implementation manner, the number of recording mode selection controls is at least two, and the at least two recording mode selection controls can be in a triggered state at the same time.
In this example, at least two recording mode selection controls corresponding to at least two recording modes one to one may be displayed through the vehicle-mounted display screen, where the at least two recording mode selection controls may be simultaneously in an on state, that is, the at least two recording controls may be simultaneously in a trigger state. The recording request can be generated according to the recording mode corresponding to the recording mode selection control in response to the starting instruction of any one of the at least two recording mode selection controls.
For example, the shooting control and the video recording control can be displayed through a vehicle-mounted display screen; for another example, a control for taking a single photo, a control for continuously taking a plurality of photos, a control for recording a video and a control for delayed shooting can be displayed through a vehicle-mounted display screen; and so on.
Fig. 6 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the method for processing a vehicle-mounted video provided by the embodiment of the disclosure. In the example shown in fig. 6, the photographing control and the recording control are displayed through the vehicle-mounted display screen, where the photographing control and the recording control may be simultaneously activated. For example, after the user selects the display area corresponding to the left camera picture and the display area corresponding to the right camera picture, the user clicks the recording control, and the video can be recorded according to the video stream collected by the left camera of the vehicle and the video stream collected by the right camera of the vehicle. In the process of recording the video, if the user sees an interested landscape and wants to take a picture, the user can click the picture taking control so as to take a picture according to the video stream acquired by the left camera of the vehicle and the video stream acquired by the right camera of the vehicle.
In this implementation, at least two recording mode selection controls corresponding one-to-one to at least two recording modes are displayed through the vehicle-mounted display screen, and they may be in a triggered state at the same time. This resolves the conflict between wanting to record a video and wanting to take a photo of a scene of interest at the same moment: when the user wants a photo while recording a video from the video stream collected by the vehicle-mounted camera, the user does not need to take out his or her own mobile phone or camera, which further meets user requirements.
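The two simultaneously active controls can be sketched as a session in which a photo request does not interrupt video recording; the class and method names are hypothetical, and frames are simple stand-in objects:

```python
class RecordingSession:
    """Sketch of two recording-mode controls active at once: the video
    session keeps consuming the stream while a photo request grabs the
    current frame without stopping the recording."""
    def __init__(self):
        self.recording = False
        self.frames = []   # frames accumulated into the video
        self.photos = []   # single-shot captures

    def start_video(self):
        self.recording = True

    def on_frame(self, frame):
        if self.recording:
            self.frames.append(frame)

    def take_photo(self, frame):
        # Photo capture does not stop the ongoing video recording.
        self.photos.append(frame)

s = RecordingSession()
s.start_video()
for f in range(5):
    s.on_frame(f)
    if f == 2:
        s.take_photo(f)           # snap a photo mid-recording
assert s.recording and len(s.frames) == 5 and s.photos == [2]
```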
In a possible implementation manner, the generating target image data according to the video stream corresponding to the selected display area includes: in response to the number of selected display areas being greater than or equal to 2, generating the target image data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas.
For example, the selected display area includes a display area corresponding to the front camera and a display area corresponding to the rear camera, wherein the shooting area corresponding to the video stream collected by the front camera is in front of the shooting area corresponding to the video stream collected by the rear camera, and therefore, in the target image data, the image information corresponding to the front camera may be above the image information corresponding to the rear camera.
For another example, the selected display area includes a display area corresponding to the left camera and a display area corresponding to the rear camera, where the shooting area corresponding to the video stream collected by the left camera is on the left side of the shooting area corresponding to the video stream collected by the rear camera, and therefore, in the target image data, the image information corresponding to the left camera may be on the left side of the image information corresponding to the rear camera.
For another example, the selected display area includes a display area corresponding to the right camera and a display area corresponding to the rear camera, where the shooting area corresponding to the video stream collected by the right camera is on the right side of the shooting area corresponding to the video stream collected by the rear camera, and therefore, in the target image data, the image information corresponding to the right camera may be on the right side of the image information corresponding to the rear camera.
In this implementation, the target image data is generated according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas in response to the number of selected display areas being greater than or equal to 2, so that the generated target image data can be more harmonious and natural.
As an example of this implementation manner, the generating target image data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas includes: and splicing the video streams corresponding to the display areas according to the relative position relation of the shooting areas corresponding to the at least two selected display areas to generate target image data.
In this example, the target image data is generated by splicing the video streams corresponding to the display areas based on the relative positional relationship between the shooting areas corresponding to the at least two selected display areas, whereby more natural target image data can be obtained.
As an example of this implementation, the generating target image data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas includes: in response to an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, splicing the video streams corresponding to the display areas having the intersection area to generate the target image data.
For example, an intersection area exists between the shooting area of the video stream captured by the vehicle's left camera and the shooting area of the video stream captured by the vehicle's front camera. If the at least two selected display areas include the display area corresponding to the left camera and the display area corresponding to the front camera, the video stream captured by the left camera can be spliced with the video stream captured by the front camera.
For another example, an intersection area exists between the shooting area of the video stream captured by the vehicle's left camera and the shooting area of the video stream captured by the vehicle's rear camera. If the at least two selected display areas include the display area corresponding to the left camera and the display area corresponding to the rear camera, the video stream captured by the left camera can be spliced with the video stream captured by the rear camera.
For another example, an intersection area exists between the shooting area of the video stream captured by the vehicle's right camera and the shooting area of the video stream captured by the vehicle's front camera. If the at least two selected display areas include the display area corresponding to the right camera and the display area corresponding to the front camera, the video stream captured by the right camera can be spliced with the video stream captured by the front camera.
For another example, an intersection area exists between the shooting area of the video stream captured by the vehicle's right camera and the shooting area of the video stream captured by the vehicle's rear camera. If the at least two selected display areas include the display area corresponding to the right camera and the display area corresponding to the rear camera, the video stream captured by the right camera can be spliced with the video stream captured by the rear camera.
For another example, an intersection area exists between the shooting area of the video stream captured by the vehicle's front camera and the shooting area of the video stream captured by the vehicle's rear camera. If the at least two selected display areas include the display area corresponding to the front camera and the display area corresponding to the rear camera, the video stream captured by the front camera can be spliced with the video stream captured by the rear camera.
In this example, in the case where the number of selected display areas is at least three, the number of spliced video streams may likewise be three or more. For example, the video streams captured by the front, left, and rear cameras can be spliced; for another example, the video streams captured by the front, right, and rear cameras can be spliced; for another example, the video streams captured by the left, front, and right cameras can be spliced; for another example, the video streams captured by the left, rear, and right cameras can be spliced; and so on.
In this example, in response to an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, the target video data is generated by splicing the video streams corresponding to the display areas having the intersection area, thereby obtaining seamless wide-angle video data.
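The pairwise splicing described in the examples above can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the patented implementation: two same-height frames whose shooting areas share a known horizontal overlap are joined, and a linear cross-fade is applied across the intersection region so the seam is less visible. The overlap width and the side-by-side layout are assumptions of the example.

```python
import numpy as np

def splice_frames(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Splice two same-height frames (H x W x C) whose shooting areas share
    an `overlap`-pixel-wide intersection region along the horizontal axis."""
    h, wl, c = left.shape
    _, wr, _ = right.shape
    out = np.zeros((h, wl + wr - overlap, c), dtype=np.float32)
    # Copy the non-overlapping parts of each frame directly.
    out[:, : wl - overlap] = left[:, : wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Linearly cross-fade inside the intersection region to soften the seam.
    alpha = np.linspace(1.0, 0.0, overlap).reshape(1, overlap, 1)
    out[:, wl - overlap : wl] = alpha * left[:, wl - overlap :] + (1 - alpha) * right[:, :overlap]
    return out.astype(np.uint8)
```

A production system would first warp the frames into a common projection (the shooting areas of, say, the left and front cameras are not coplanar) before blending; stitching libraries such as OpenCV's stitching module handle that step.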
As another example of this implementation, the generating target video data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas includes: in response to an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, and the two vehicle-mounted cameras corresponding to the two display areas having the intersection area being identical in focal length and model, splicing the video streams corresponding to those two display areas to generate the target video data. Pictures of the same shooting area captured by vehicle-mounted cameras with different focal lengths may fail to align, and the brightness, color, and the like of vehicle-mounted cameras of different models may be inconsistent. Therefore, splicing is performed only when an intersection area exists between the shooting areas corresponding to two of the selected display areas and the two corresponding vehicle-mounted cameras share the same focal length and model, so that target video data of higher quality can be generated.
As another example of this implementation, the generating target video data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas includes: in response to an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, and the two vehicle-mounted cameras corresponding to the two display areas having the intersection area being identical in focal length, splicing the video streams corresponding to those two display areas to generate the target video data.
As another example of this implementation, the generating target video data according to the relative positional relationship between the shooting areas corresponding to the at least two selected display areas includes: in response to an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, and the two vehicle-mounted cameras corresponding to the two display areas having the intersection area being identical in model, splicing the video streams corresponding to those two display areas to generate the target video data.
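The conditions in the examples above (an intersection area between shooting areas, plus matching focal length and/or model) reduce to a simple predicate over camera metadata. A hypothetical sketch; the `CameraInfo` fields and the externally computed `regions_intersect` flag are illustrative assumptions, not names from the disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraInfo:
    position: str          # azimuth of the vehicle-mounted camera, e.g. "front"
    focal_length_mm: float
    model: str

def can_splice(a: CameraInfo, b: CameraInfo, regions_intersect: bool) -> bool:
    """Splice two streams only when their shooting areas intersect and the
    cameras match in focal length and model, since mismatched optics can
    produce misaligned pictures or inconsistent brightness and color."""
    return (
        regions_intersect
        and a.focal_length_mm == b.focal_length_mm
        and a.model == b.model
    )
```

The looser variants in the text correspond to dropping either the focal-length or the model comparison from the predicate.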
As another example of this implementation, a splicing control may be displayed on the vehicle-mounted display screen. In response to the splicing control being turned on and an intersection area existing between the shooting areas corresponding to any two of the at least two selected display areas, the video streams corresponding to the display areas having the intersection area are spliced to generate the target video data. Fig. 7 shows another schematic diagram of a display interface of a vehicle-mounted display screen in the vehicle-mounted video processing method provided by the embodiment of the disclosure. In the example shown in Fig. 7, the splicing control is displayed via the vehicle-mounted display screen.
In one possible implementation, after the generating the target video data, the method further includes: generating a file name corresponding to the target video data according to the azimuth information of the vehicle-mounted camera corresponding to the target video data; and storing the target video data according to the file name.
As an example of this implementation, the file name of the target video data may further include at least one of a generation date of the target video data, a generation time of the target video data, and the like. For example, if the target video data is named "20220808_183059_front.mp4", it may indicate that the target video data is video information captured by the front camera. For another example, if the target video data is named "20220808_183059_front_rear.mp4", it may indicate that the target video data includes video information captured by the front camera and video information captured by the rear camera.
In this implementation, the file name corresponding to the target video data is generated according to the azimuth information of the vehicle-mounted camera corresponding to the target video data, and the target video data is stored under that file name, so that the user can conveniently and quickly retrieve the target video data.
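A file name of the kind described here can be assembled from the generation timestamp and the azimuth labels of the contributing cameras. The sketch below is illustrative only; the label strings and the timestamp layout are assumptions modeled on the "front"/"rear" examples:

```python
from datetime import datetime

def make_file_name(positions: list[str], generated_at: datetime, ext: str = "mp4") -> str:
    """Build a name such as 20220808_183059_front_rear.mp4 from the
    generation date/time and each camera's azimuth information."""
    stamp = generated_at.strftime("%Y%m%d_%H%M%S")
    return "_".join([stamp, *positions]) + "." + ext
```

Because the azimuth labels are embedded in the name itself, a user (or a file browser filter) can locate, say, all front-camera clips with a plain substring search.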
In a possible implementation, after the target video data is generated, it may be trimmed to the duration or size desired by the user, or a filter or the like may be added, which is not limited herein.
In one possible implementation, after the generating the target video data, the method further includes: in response to a sharing request for the target video data, sharing the target video data. For example, the target video data may be shared through Wi-Fi (wireless fidelity) or the like.
As an example of this implementation, the target video data may be shared to a specified terminal device in response to a sharing request for the target video data. For example, the designated terminal device may be a user's cell phone or the like.
As another example of this implementation, the target video data may be shared to a specified social platform in response to a sharing request for the target video data.
The vehicle-mounted video processing method provided by the embodiment of the disclosure can be applied to technical fields such as in-vehicle systems, photography, video streaming, and image processing, which are not limited herein.
The following describes the vehicle-mounted video processing method provided by the embodiment of the present disclosure through a specific application scenario. In this application scenario, as shown in Fig. 7, the vehicle-mounted display screen may include a display area corresponding to the in-vehicle camera, a display area corresponding to the front camera, a display area corresponding to the surround-view camera, a display area corresponding to the left camera, a display area corresponding to the rear camera, and a display area corresponding to the right camera, and the vehicle-mounted display screen may display the shooting/recording control and the splicing control. The user may select a display area by clicking the circle in its upper right corner. A recording request for photographing may be generated in response to detecting a single-click operation on the shooting/recording control, and a recording request for video recording may be generated in response to detecting a long-press operation on the shooting/recording control. After the target video data is generated, it may be trimmed in duration or size according to the user's requirements, and a filter may be added. In addition, the target video data can be shared to the user's mobile phone or social platform in response to a sharing request.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; details are omitted here due to space limitations. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a vehicle-mounted video processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product, all of which can be used to implement any of the vehicle-mounted video processing methods provided by the present disclosure; for the corresponding technical solutions and effects, reference may be made to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 8 shows a block diagram of a processing apparatus for in-vehicle video provided by an embodiment of the present disclosure. As shown in fig. 8, the processing apparatus for the in-vehicle video includes:
the acquisition module 21 is configured to acquire a video stream acquired by the vehicle-mounted camera;
the first display module 22 is used for displaying the video stream through a display area of the vehicle-mounted display screen;
the second display module 23 is configured to, in response to selection of any display area, display the selected display area in a selected state;
and the generating module 24 is configured to generate target video data according to the video stream corresponding to the selected display area in response to a recording request.
In a possible implementation, the video stream displayed in the display area has a first preset resolution, and the video stream used for generating the target video data has a second preset resolution, where the first preset resolution is lower than the second preset resolution.
In a possible implementation manner, the obtaining module 21 is configured to:
and encoding the original video data acquired by the vehicle-mounted camera into the video stream with the first preset resolution and the video stream with the second preset resolution.
In one possible implementation manner, the vehicle-mounted display screen comprises a recording manner selection control;
the generating module 24 is configured to:
and responding to the trigger of any recording mode selection control, and generating target image data corresponding to the recording mode according to the video stream corresponding to the selected display area.
In a possible implementation, there are at least two recording manner selection controls, and at least two of them can be in a triggered state at the same time.
In a possible implementation manner, the vehicle-mounted display screen includes a plurality of display areas, the plurality of display areas respectively display video streams collected by a plurality of vehicle-mounted cameras, and a relative positional relationship between the plurality of display areas is determined based on a relative positional relationship between shooting areas of the plurality of vehicle-mounted cameras.
In one possible implementation manner, in a case where an intersection region exists between the shooting regions corresponding to the first display region and the second display region in the plurality of display regions, the image information of any sub-region of the intersection region is displayed only through the first display region or only through the second display region.
In a possible implementation manner, for any sub-area of the intersection area, in response to that the distortion degree of the sub-area in the first video stream is lower than that of the sub-area in the second video stream, displaying the image information of the sub-area through the first display area; or, for any sub-region of the intersection region, in response to that the distortion degree of the sub-region in the first video stream is higher than or equal to the distortion degree of the sub-region in the second video stream, displaying the image information of the sub-region through the second display region; the first video stream is a video stream acquired by a first vehicle-mounted camera corresponding to the first display area, and the second video stream is a video stream acquired by a second vehicle-mounted camera corresponding to the second display area.
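The per-sub-region choice described above reduces to comparing distortion degrees and showing the sub-region only in the stream that distorts it less, with ties going to the second display area to match the "higher than or equal to" branch. A hypothetical sketch, assuming the distortion degrees are available as scalar scores:

```python
def pick_display_area(distortion_in_first: float, distortion_in_second: float) -> str:
    """Decide which display area shows a sub-region of the intersection area.

    The sub-region is displayed only in the first display area when its
    distortion in the first video stream is strictly lower; otherwise
    (higher or equal) it is displayed only in the second display area.
    """
    return "first" if distortion_in_first < distortion_in_second else "second"
```

For instance, a sub-region near the edge of a wide-angle frame would typically score a higher distortion degree there than near the center of the other camera's frame, so the other display area would show it.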
In one possible implementation, the number of the display areas is at least two;
the device further comprises:
and the hiding module is used for responding to the recording request and hiding the unselected display areas in the at least two display areas.
In one possible implementation, the apparatus further includes:
and the amplifying module is used for responding to the recording request and amplifying the selected display area.
In one possible implementation, the generating module 24 is configured to:
and responding to the fact that the number of the selected display areas is larger than or equal to 2, and generating target image data according to the relative position relation of the shooting areas corresponding to at least two selected display areas.
In one possible implementation, the generating module 24 is configured to:
and splicing the video streams corresponding to the display areas according to the relative position relation of the shooting areas corresponding to the at least two selected display areas to generate target image data.
In one possible implementation, the in-vehicle display screen includes:
the display screen is arranged in the auxiliary cab and/or the display screen is arranged in the rear seat area.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and for concrete implementation and technical effects, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
Embodiments of the present disclosure also provide a computer program, which includes computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes the above method.
The disclosed embodiments also provide a computer program product, including computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 9 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure. For example, the electronic device 800 may be a vehicle, a domain controller, or a processor in a vehicle cabin, and may also be a device host used in a driver monitoring system (DMS) or an occupant monitoring system (OMS) for performing data processing operations such as image processing.
Referring to fig. 9, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The input/output interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as wireless network (Wi-Fi), second generation mobile communication technology (2G), third generation mobile communication technology (3G), fourth generation mobile communication technology (4G), long term evolution of universal mobile communication technology (LTE), fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the disclosure are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), with state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
The foregoing descriptions of the various embodiments emphasize the differences between them; for aspects that are the same or similar across embodiments, the descriptions may be consulted with reference to one another and are not repeated here for brevity.
Where the technical solutions of the embodiments of the present disclosure involve personal information, a product applying these solutions clearly informs the user of the personal-information processing rules and obtains the individual's voluntary consent before processing that information. Where the solutions involve sensitive personal information, a product applying them obtains the individual's separate consent before processing, and additionally satisfies the requirement of "express consent". For example, at a personal-information collection device such as a camera, a clear and prominent sign is placed to inform people that they are entering a collection range and that personal information will be collected; a person who voluntarily enters the collection range is deemed to consent to the collection. Alternatively, on the device that processes the personal information, authorization is obtained, after the processing rules have been announced through a prominent sign or message, by means such as a pop-up notice or by asking the individual to upload the information personally. The personal-information processing rules may include the identity of the personal-information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or their improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (17)
1. A method for processing vehicle-mounted video, characterized by comprising:
acquiring a video stream acquired by a vehicle-mounted camera;
displaying the video stream through a display area of a vehicle-mounted display screen;
in response to any display area being selected, displaying the selected display area in a selected state; and
in response to a recording request, generating target image data according to the video stream corresponding to the selected display area.
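The four steps of claim 1 can be sketched as a minimal controller; all class and method names below are illustrative assumptions for exposition, not an actual in-vehicle API.

```python
# Hypothetical sketch of the claim-1 flow: acquire camera streams, display
# them in areas, track which area is selected, and on a recording request
# emit target data from the selected area's stream.

class DisplayArea:
    def __init__(self, area_id, video_stream):
        self.area_id = area_id
        self.video_stream = video_stream   # frames from one vehicle-mounted camera
        self.selected = False              # "selected state" flag for the UI

class VideoProcessor:
    def __init__(self, streams):
        # One display area per acquired camera stream.
        self.areas = [DisplayArea(i, s) for i, s in enumerate(streams)]

    def select(self, area_id):
        # Mark the chosen area so the UI can render it in a selected state.
        for area in self.areas:
            if area.area_id == area_id:
                area.selected = True

    def on_record_request(self):
        # Generate target image data only from the selected areas' streams.
        return [area.video_stream for area in self.areas if area.selected]

proc = VideoProcessor([["front_f0", "front_f1"], ["rear_f0", "rear_f1"]])
proc.select(0)
target = proc.on_record_request()
```

A real implementation would pull frames from a capture pipeline rather than lists, but the selection-then-record control flow is the same.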
2. The method of claim 1, wherein the video stream displayed in the display area has a first predetermined resolution and the video stream used for generating the target image data has a second predetermined resolution, the first predetermined resolution being lower than the second predetermined resolution.
3. The method according to claim 2, wherein acquiring the video stream captured by the vehicle-mounted camera comprises:
encoding the original video data captured by the vehicle-mounted camera into a video stream at the first predetermined resolution and a video stream at the second predetermined resolution.
4. The method of claim 1, wherein the vehicle-mounted display screen includes a recording mode selection control;
generating the target image data according to the video stream corresponding to the selected display area in response to the recording request comprises:
in response to any recording mode selection control being triggered, generating target image data corresponding to that recording mode according to the video stream corresponding to the selected display area.
5. The method according to claim 4, wherein there are at least two recording mode selection controls, and the at least two recording mode selection controls can be in a triggered state simultaneously.
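Claims 4 and 5 allow several recording-mode controls to be triggered at once, each yielding its own target data; the mode names "photo" and "video" below are illustrative assumptions.

```python
# Sketch: multiple simultaneously active recording modes each produce a
# target from the selected area's stream in one recording request.

def generate_targets(selected_stream, active_modes):
    targets = {}
    if "photo" in active_modes:
        targets["photo"] = selected_stream[-1]    # a single still frame
    if "video" in active_modes:
        targets["video"] = list(selected_stream)  # the full clip
    return targets

out = generate_targets(["f0", "f1", "f2"], active_modes={"photo", "video"})
```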
6. The method according to any one of claims 1 to 5, wherein the vehicle-mounted display screen comprises a plurality of display areas, the plurality of display areas respectively display video streams captured by a plurality of vehicle-mounted cameras, and the relative positional relationship between the plurality of display areas is determined based on the relative positional relationship between the shooting areas of the plurality of vehicle-mounted cameras.
7. The method according to claim 6, wherein, when the shooting areas corresponding to a first display area and a second display area among the plurality of display areas have an intersection region, the image information of any sub-region of the intersection region is displayed only by the first display area or only by the second display area.
8. The method of claim 7,
for any sub-region of the intersection region, in response to the distortion degree of the sub-region in a first video stream being lower than its distortion degree in a second video stream, displaying the image information of the sub-region through the first display area; or, in response to the distortion degree of the sub-region in the first video stream being higher than or equal to its distortion degree in the second video stream, displaying the image information of the sub-region through the second display area; wherein the first video stream is the video stream captured by a first vehicle-mounted camera corresponding to the first display area, and the second video stream is the video stream captured by a second vehicle-mounted camera corresponding to the second display area.
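The claim-8 rule assigns each sub-region of the intersection to whichever stream shows it with less distortion, ties going to the second stream. The `distortion_of` metric below is an assumption; a real system might derive it from per-pixel lens-calibration error.

```python
# Sketch: for each sub-region of the overlap between two cameras' shooting
# areas, pick the display area whose stream renders it with less distortion.

def assign_subregions(subregions, distortion_of):
    """Map each sub-region to the 'first' or 'second' display area."""
    assignment = {}
    for sub in subregions:
        d1 = distortion_of(sub, "first_stream")
        d2 = distortion_of(sub, "second_stream")
        # Strictly lower distortion in the first stream -> first area;
        # higher or equal -> second area, matching the claim wording.
        assignment[sub] = "first" if d1 < d2 else "second"
    return assignment

# Toy metric: fixed per-(sub-region, stream) distortion values.
toy = {("a", "first_stream"): 0.1, ("a", "second_stream"): 0.3,
       ("b", "first_stream"): 0.5, ("b", "second_stream"): 0.5}
result = assign_subregions(["a", "b"], lambda s, cam: toy[(s, cam)])
```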
9. The method according to any one of claims 1 to 4, wherein the number of display areas is at least two;
the method further comprises:
in response to the recording request, hiding the display areas that are not selected among the at least two display areas.
10. The method according to any one of claims 1 to 4, further comprising:
in response to the recording request, enlarging the selected display area.
11. The method according to any one of claims 1 to 4, wherein generating the target image data according to the video stream corresponding to the selected display area comprises:
in response to the number of selected display areas being greater than or equal to two, generating the target image data according to the relative positional relationship of the shooting areas corresponding to the at least two selected display areas.
12. The method according to claim 11, wherein generating the target image data according to the relative positional relationship of the shooting areas corresponding to the at least two selected display areas comprises:
stitching the video streams corresponding to the respective display areas according to the relative positional relationship of the shooting areas corresponding to the at least two selected display areas, to generate the target image data.
13. The method of any of claims 1 to 4, wherein the in-vehicle display screen comprises:
a display screen arranged at the front passenger seat and/or a display screen arranged in the rear seat area.
14. An apparatus for processing vehicle-mounted video, comprising:
an acquisition module, configured to acquire a video stream captured by a vehicle-mounted camera;
a first display module, configured to display the video stream through a display area of a vehicle-mounted display screen;
a second display module, configured to display, in response to any display area being selected, the selected display area in a selected state; and
a generating module, configured to generate, in response to a recording request, target image data according to the video stream corresponding to the selected display area.
15. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the method of any one of claims 1 to 13.
16. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 13.
17. A computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, wherein, when the computer readable code runs in an electronic device, a processor in the electronic device performs the method of any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211381789.0A CN115460352B (en) | 2022-11-07 | 2022-11-07 | Vehicle-mounted video processing method, device, equipment, storage medium and program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115460352A true CN115460352A (en) | 2022-12-09 |
CN115460352B CN115460352B (en) | 2023-04-07 |
Family
ID=84310713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211381789.0A Active CN115460352B (en) | 2022-11-07 | 2022-11-07 | Vehicle-mounted video processing method, device, equipment, storage medium and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115460352B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2006100941A4 (en) * | 2006-11-03 | 2006-11-30 | Panayiotis Moraitopoulos | Vehicle recording system |
CN101231764A (en) * | 2007-11-06 | 2008-07-30 | 赵志 | Automobile image video recording system |
CN208291083U (en) * | 2018-05-11 | 2018-12-28 | 宝沃汽车(中国)有限公司 | In-vehicle display system and vehicle |
CN109947313A (en) * | 2019-02-21 | 2019-06-28 | 贵安新区新特电动汽车工业有限公司 | Vehicle-carrying display screen divides display methods and device |
CN110087123A (en) * | 2019-05-15 | 2019-08-02 | 腾讯科技(深圳)有限公司 | Video file production method, device, equipment and readable storage medium storing program for executing |
CN111710393A (en) * | 2020-04-28 | 2020-09-25 | 视联动力信息技术股份有限公司 | Data transmission method, device, terminal equipment and storage medium |
CN114915745A (en) * | 2021-02-07 | 2022-08-16 | 华为技术有限公司 | Multi-scene video recording method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8400507B2 (en) | Scene selection in a vehicle-to-vehicle network | |
US8345098B2 (en) | Displayed view modification in a vehicle-to-vehicle network | |
CN108382305B (en) | Image display method and device and vehicle | |
CN102450007A (en) | Image processing apparatus, electronic apparatus, and image processing method | |
KR20120118073A (en) | Vehicle periphery monitoring device | |
JP6816769B2 (en) | Image processing equipment and image processing method | |
JP6816768B2 (en) | Image processing equipment and image processing method | |
JP2005184395A (en) | Method, system and apparatus for image processing, and photographing equipment | |
CN103496339B (en) | A kind of display system by 3D display automobile panorama and its implementation | |
JP2023046953A (en) | Image processing system, mobile device, image processing method, and computer program | |
KR20150002994A (en) | Apparatus Reliably Providing Vehicle Around Image | |
US11671700B2 (en) | Operation control device, imaging device, and operation control method | |
CN117827997A (en) | Map rendering method, map updating device and server | |
CN115460352B (en) | Vehicle-mounted video processing method, device, equipment, storage medium and program product | |
CN116743943A (en) | Inter-domain video stream data sharing system, method, equipment and medium | |
US11070714B2 (en) | Information processing apparatus and information processing method | |
JP2020150295A (en) | Vehicle crime prevention device | |
US20160167581A1 (en) | Driver interface for capturing images using automotive image sensors | |
CN107458299B (en) | Vehicle lamp control method and device and computer readable storage medium | |
CN116101174A (en) | Collision reminding method and device for vehicle, vehicle and storage medium | |
CN114013367B (en) | High beam use reminding method and device, electronic equipment and storage medium | |
CN112954291B (en) | Method, device and storage medium for processing 3D panoramic image or video of vehicle | |
CN114475436A (en) | User-defined vehicle-mounted image setting method and device, vehicle-mounted equipment and storage medium | |
JP2023046965A (en) | Image processing system, moving device, image processing method, and computer program | |
CN109429042B (en) | Surrounding visual field monitoring system and blind spot visual field monitoring image providing method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||