WO2022105758A1 - Road identification method and apparatus - Google Patents

Road identification method and apparatus

Info

Publication number
WO2022105758A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
dmsdp
service
road
control command
Prior art date
Application number
PCT/CN2021/130988
Other languages
English (en)
French (fr)
Inventor
冷烨
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to US18/253,700 (published as US20240125603A1)
Publication of WO2022105758A1


Classifications

    • G01C21/30 Map- or contour-matching (navigation specially adapted for a road network, with correlation of data from several navigational instruments)
    • G06V10/16 Image acquisition using multiple overlapping images; image stitching
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V20/56 Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G08G1/0104 Measuring and analysing of parameters relative to traffic conditions
    • G08G1/017 Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175 Identifying vehicles by photographing them, e.g. when violating traffic rules
    • G08G1/0962 Arrangements for giving variable traffic instructions, with an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 Systems involving transmission of navigation instructions to the vehicle
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N23/60 Control of cameras or camera modules
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/80 Camera processing pipelines; components thereof
    • H04N5/265 Mixing of studio circuits, e.g. for special effects
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Definitions

  • the present application relates to the field of computer technology, and in particular, to a road identification method and device.
  • FIG. 1 shows a schematic diagram of an example vehicle-mounted panoramic imaging.
  • multiple cameras can be installed on the front, rear, left, and right sides of the car to take pictures in different directions, showing the driver a view of the outside of the car, providing assistance to the driver, and warning the driver of obstacles in the path that may not be immediately visible.
  • the vehicle-mounted panoramic imaging technology only displays the road environment conditions near the vehicle and provides no positioning.
  • In-vehicle navigation applies GPS navigation to vehicle navigation through commercial communication satellites to provide navigation services for car drivers.
  • the GPS car navigation system can navigate the car on urban roads, suburban roads, and even remote deserts, Gobi, grasslands and other areas, avoiding the trouble caused by drivers being unfamiliar with road conditions.
  • a navigation application that uses satellites to perform a series of services such as positioning and guidance on terminal hardware brings great convenience to travel.
  • navigation applications can provide services such as route recommendation between two places, road condition prompts, and real-time positioning, and offer wide global coverage, speed, time savings, efficiency, broad applicability, multiple functions, and mobile positioning.
  • FIGS. 2a and 2b respectively show example schematic diagrams of a user manually selecting a lane. As shown in Fig. 2a, when the vehicle is near a viaduct, the lower left corner of the display shows an icon (control) with which the user can manually select whether the vehicle is on the bridge; as shown in Fig. 2b, the display can further show a control with which the user can choose whether the vehicle is on the auxiliary road (lower right of Fig. 2b).
  • a road recognition method and device are provided, which can locate the exact road by combining the real-time pictures around the vehicle and can still navigate correctly under complex road conditions; because no manual selection by the user is required, correct navigation is achieved while security risks are eliminated.
  • an embodiment of the present application provides a road identification method, the method is applied to a first device, the first device includes two or more virtual camera HALs, and the method includes:
  • when detecting a road identification request, the first device sends a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service, wherein the control command is used to control the physical camera to shoot the current picture;
  • the first device receives, through the DMSDP service, the current picture returned by the physical camera in response to the control command;
  • the first device identifies the road where the first device is located according to the current picture and road information obtained by the local navigation application.
  • the first device acquires the hardware parameters of the physical camera through the DMSDP service; the first device locally configures the corresponding virtual camera according to the hardware parameters of the physical camera.
  • the hardware parameters include camera capability parameters of the physical camera;
  • the first device locally configuring the corresponding virtual camera according to the hardware parameters of the physical camera includes: creating, by the first device, a virtual camera hardware abstraction layer HAL and a camera framework corresponding to the physical camera according to the camera capability parameters, and adding the camera framework to the distributed camera framework.
  • the DMSDP service includes a command pipeline and a data pipeline
  • Sending, by the first device, a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service includes: the first device sending a control command to the physical camera corresponding to the virtual camera through a command pipeline of the DMSDP service;
  • the receiving, by the first device, the current picture returned by the physical camera in response to the control command through the DMSDP service includes: receiving, by the first device through a data pipeline of the DMSDP service, the current picture returned by the physical camera in response to the control command.
  • the first device identifying the road on which the first device is located according to the current picture and road information obtained by a local navigation application includes:
  • the camera HAL of the local camera of the first device stitches a plurality of the current pictures to obtain a stitched image
  • the navigation application of the first device identifies the road where the first device is located according to the stitched image and the road information obtained by the navigation application.
  • the physical camera is a vehicle-mounted camera.
  • an embodiment of the present application provides a method for acquiring an image, the method comprising:
  • the second device receives a control command sent by the first device through the DMSDP service, wherein the control command is used to control the second device to shoot a current picture;
  • the second device turns on the camera according to the control command and shoots the current picture
  • the second device sends the current picture to the first device.
  • the method further includes:
  • when receiving the configuration request sent by the first device, the second device sends the hardware parameters of the physical camera of the second device to the first device through the DMSDP service.
  • an embodiment of the present application provides a road identification device, the device is applied to a first device, the first device includes two or more virtual camera HALs, and the device includes:
  • control module configured to send a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service when a road identification request is detected; wherein, the control command is used to control the physical camera to shoot a current picture;
  • a first receiving module configured to receive the current picture returned by the physical camera in response to the control command through a DMSDP service
  • the identification module is configured to identify the road where the first device is located according to the current picture and the road information obtained by the local navigation application.
  • the apparatus further includes:
  • an acquisition module configured to acquire hardware parameters of the physical camera through the DMSDP service
  • the configuration module is configured to locally configure the corresponding virtual camera according to the hardware parameters of the physical camera.
  • the hardware parameters include camera capability parameters of the physical camera
  • the configuration module includes:
  • a configuration unit configured to create a virtual camera hardware abstraction layer HAL and a camera framework corresponding to the physical camera according to the camera capability parameters, and add the camera framework to the distributed camera framework.
  • the DMSDP service includes a command pipeline and a data pipeline
  • the control module includes: a control unit, configured to send a control command to the physical camera corresponding to the virtual camera through the command pipeline of the DMSDP service;
  • the first receiving module includes: a receiving unit, configured to receive the current picture returned by the physical camera in response to the control command through a data pipeline of the DMSDP service.
  • the identification module includes:
  • a splicing unit used for splicing a plurality of the current pictures through the camera HAL of the local camera to obtain a spliced image
  • the identification unit is configured to identify the road where the first device is located according to the stitched image and the road information obtained by the navigation application through the navigation application.
  • the physical camera is a vehicle-mounted camera.
  • an embodiment of the present application provides an apparatus for acquiring an image, the apparatus comprising:
  • a second receiving module configured to receive a control command sent by the first device through the DMSDP service, wherein the control command is used to control the second device to shoot a current picture
  • a shooting module used to open the camera and shoot the current picture according to the control command
  • a first sending module configured to send the current picture to the first device.
  • the apparatus further includes:
  • the second sending module is configured to send the hardware parameters of the physical camera of the second device to the first device through the DMSDP service when receiving the configuration request sent by the first device.
  • an embodiment of the present application provides a data transmission device, including:
  • memory for storing processor-executable instructions
  • the processor is configured to implement the method described in any one of the implementation manners of the first aspect or implement the method described in any one of the implementation manners of the second aspect when executing the instructions.
  • embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, wherein when the computer program instructions are executed by a processor, the method of the first aspect is implemented.
  • an embodiment of the present application provides a terminal device, which can execute the first aspect or one or more of the road identification methods in multiple possible implementation manners of the first aspect.
  • embodiments of the present application provide a computer program product, comprising computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, wherein when the computer-readable codes run in an electronic device, the processor in the electronic device executes the first aspect or one or more of the road identification methods in the multiple possible implementation manners of the first aspect.
  • an embodiment of the present application provides a terminal device, and the terminal device can execute the above-mentioned second aspect or one or more of the image acquisition methods in multiple possible implementation manners of the second aspect.
  • embodiments of the present application provide a computer program product, comprising computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, wherein when the computer-readable codes run in an electronic device, the processor in the electronic device executes the second aspect or one or more of the image acquisition methods in the multiple possible implementation manners of the second aspect.
  • FIG. 1 shows a schematic diagram of an example vehicle-mounted panoramic imaging.
  • Figures 2a and 2b respectively show an exemplary schematic diagram of a user manually selecting a lane.
  • FIG. 3 shows a schematic diagram of an application scenario of a road identification method according to an embodiment of the present application.
  • FIG. 4 shows a block diagram of a device configuration in an application scenario according to an embodiment of the present application.
  • FIG. 5 shows a flowchart of a road identification method according to an embodiment of the present application.
  • FIGS. 6a and 6b respectively show schematic diagrams of scenarios in which a road identification request is monitored according to an example of the present application.
  • FIG. 7 shows a flowchart of a method for acquiring an image according to an embodiment of the present application.
  • FIG. 8 shows a flowchart of a road identification method according to an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of an application scenario according to an embodiment of the present application.
  • FIG. 10 shows an interaction diagram between a first device and a second device according to an embodiment of the present application.
  • FIG. 11 shows a schematic diagram of a road recognition result according to an embodiment of the present application.
  • FIG. 12 shows a block diagram of a road identification device according to an embodiment of the present application.
  • FIG. 13 shows a block diagram of an apparatus for acquiring an image according to an embodiment of the present application.
  • FIG. 14 shows a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • FIG. 15 shows a block diagram of a software structure of a terminal device according to an embodiment of the present application.
  • the surrounding pictures are photographed by a mobile phone, and an accurate road is located according to the photographed pictures and road information obtained by a navigation application.
  • this method is not suitable for use while the vehicle is driving, and the multiple pictures taken may come from different locations, which is not conducive to accurate positioning.
  • the present application provides a road identification method.
  • the road identification method of the embodiments of the present application can realize communication between a mobile phone (or another terminal device, such as a tablet or a smart watch) and multiple vehicle-mounted cameras:
  • the mobile phone controls the vehicle-mounted cameras to capture the current pictures around the vehicle, and the cameras send the current pictures back to the mobile phone.
  • the mobile phone integrates the current picture and the road information obtained by the navigation application to determine the accurate road information.
  • the road identification method of the embodiment of the present application can locate an accurate road in combination with the real-time pictures around the vehicle, and can still navigate correctly in the case of complex road conditions.
  • compared with relying only on the navigation application of the terminal, it can locate the accurate road in combination with the real-time pictures around the vehicle without requiring manual selection by the user, which achieves correct navigation and eliminates potential safety hazards.
  • FIG. 3 shows a schematic diagram of an application scenario of a road identification method according to an embodiment of the present application.
  • the application scenario may include a first device, and the first device may be a terminal device such as a mobile phone, a Pad, and a smart watch.
  • the application scenario may also include one or more second devices, and a second device may be a device including a camera, such as a vehicle-mounted camera, a mobile phone, a Pad, or a smart watch.
  • the first device may be a mobile phone
  • the second device may be a vehicle-mounted camera.
  • a communication connection is established between the first device and the second device, and data can be directly transmitted between the first device and the second device.
  • a DMSDP (distributed mobile sensing development platform, also described as a distributed fusion perception platform) service can be established between the first device and the second device.
  • the camera interfaces provided by the DMSDP service of the first device and the second device can be connected to establish a cross-device dual-view mode scene, and commands and data can be transmitted between the first device and the second device through the DMSDP service.
  • the camera interface provided by the DMSDP service may be the CameraKit interface.
  • the first device and the second device may communicate according to the DMSDP protocol.
  • DMSDP services can include command pipes and data pipes.
  • the first device may send a configuration request to the second device through the DMSDP service, where the configuration request is used to acquire hardware parameters of the physical camera of the second device.
  • the first device may send a configuration request to the second device through the command pipeline of the DMSDP service.
  • the second device receives the above configuration request, and in response to the configuration request, the second device can send the hardware parameters of the physical camera of the second device to the first device in the form of a standard interface through the DMSDP service.
  • the second device may send the hardware parameters of the physical camera through the data pipeline of the DMSDP service.
  • the first device receives the hardware parameters of the physical camera of the second device through the DMSDP service, obtains them by reading the standard interface, and locally configures the corresponding virtual camera according to the hardware parameters of the physical camera of the second device.
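  • DMSDP itself is not a public API, so the following is only a minimal sketch, in Java, of the configuration handshake described above; CommandPipe, DataPipe, CameraCapability, and DmsdpConfigClient are hypothetical stand-ins for whatever the real service provides, not actual library types.

```java
// Hypothetical sketch of the configuration handshake: the first device asks
// for the second device's physical-camera parameters over the command pipe
// and receives them over the data pipe. All types here are invented.
import java.util.List;
import java.util.concurrent.CompletableFuture;

interface CommandPipe {                       // carries control messages
    void send(String command);
}

interface DataPipe {                          // carries payloads (parameters, frames)
    CompletableFuture<List<CameraCapability>> receiveCapabilities();
}

// Mirrors the capability parameters listed below (aperture, focal length,
// HDR / portrait / super night scene capabilities).
record CameraCapability(String cameraId, float aperture, float focalLengthMm,
                        boolean hdr, boolean portrait, boolean superNight) {}

final class DmsdpConfigClient {
    private final CommandPipe commands;
    private final DataPipe data;

    DmsdpConfigClient(CommandPipe commands, DataPipe data) {
        this.commands = commands;
        this.data = data;
    }

    /** First device: request the second device's physical-camera parameters. */
    CompletableFuture<List<CameraCapability>> requestHardwareParameters() {
        commands.send("CONFIG_REQUEST");      // configuration request on the command pipe
        return data.receiveCapabilities();    // hardware parameters return on the data pipe
    }
}
```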
  • the hardware parameters may include camera capability parameters of the physical camera
  • the first device may create a virtual camera hardware abstraction layer HAL (Hardware Abstraction Layer) and a camera framework corresponding to the physical camera according to the camera capability parameters, and add the camera framework to the distributed camera framework.
  • the virtual camera HAL is a hardware abstraction layer for the physical camera of the second device.
  • HAL is an interface layer between the operating system kernel and the hardware circuitry. It manages the access interface of each piece of hardware in modules, with each hardware module corresponding to a dynamic link library file. The camera HAL is therefore the access interface of the camera.
  • the camera capability parameters may include aperture parameters, focal length parameters, etc. of the camera, and may also include high dynamic range HDR (High Dynamic Range) capability, portrait capability, super night scene capability, and the like.
  • the first device may control the corresponding physical camera of the second device by controlling the virtual camera HAL.
  • the first device can turn on the virtual camera HAL locally to turn on the corresponding physical camera of the second device to shoot, and can also turn off the corresponding physical camera of the second device by using the local virtual camera HAL.
  • the first device may locally include one or more virtual camera HALs, and one virtual camera HAL corresponds to one physical camera of a second device.
  • when a second device includes two or more physical cameras, the first device can virtualize the two or more physical cameras locally to create the corresponding virtual camera HALs.
  • the second device may also be a mobile phone, and one virtual camera HAL on the first device corresponds to multiple physical cameras (front camera, rear camera) on the second device.
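  • The bookkeeping of "one virtual camera HAL per remote physical camera" can be made concrete with the hedged sketch below; VirtualCameraHal and DistributedCameraFramework are invented illustrative names (the real HAL lives in native code and is not exposed like this), and the capability fields repeat the hypothetical CameraCapability above.

```java
// Minimal sketch: each physical camera reported by a second device gets its
// own virtual camera HAL entry, and the distributed camera framework keeps
// the mapping. All names here are illustrative, not a real platform API.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

record CameraCapability(String cameraId, float aperture, float focalLengthMm,
                        boolean hdr, boolean portrait, boolean superNight) {}

final class VirtualCameraHal {
    final CameraCapability capability;        // only capabilities are stored locally
    VirtualCameraHal(CameraCapability capability) { this.capability = capability; }
}

final class DistributedCameraFramework {
    private final Map<String, VirtualCameraHal> virtualHals = new LinkedHashMap<>();

    /** Create one virtual camera HAL per physical camera of a second device. */
    void configure(List<CameraCapability> remoteCameras) {
        for (CameraCapability c : remoteCameras) {
            virtualHals.put(c.cameraId(), new VirtualCameraHal(c));
        }
    }

    VirtualCameraHal halFor(String cameraId) {
        return virtualHals.get(cameraId);
    }
}
```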
  • the present application does not limit the types of operating systems (Operating System, OS for short) of the first device and the second device.
  • the first device can use Android OS, Hongmeng OS, Linux OS and other operating systems
  • the second device can also use Android OS, Hongmeng OS, Linux OS, Lite OS and other operating systems.
  • FIG. 4 shows a block diagram of a device configuration in an application scenario according to an embodiment of the present application.
  • the first device can communicate with multiple second devices through the DMSDP service.
  • Multiple virtual camera HALs can be configured on the first device.
  • One virtual camera HAL corresponds to a physical camera of the second device.
  • each second device may include one or more physical cameras.
  • the first device may also be configured with a distributed camera framework, and when the first device receives the hardware parameters of the physical camera of the second device, it may also create a corresponding camera framework and add the camera framework to the distributed camera framework.
  • the distributed camera framework includes camera frameworks corresponding to multiple virtual camera HALs, and a control module.
  • the control module can be connected to multiple camera frameworks and is responsible for dynamic control of pipelines, equipment collaborative management, and so on.
  • by establishing a DMSDP service between the first device and the second device, a virtual camera HAL for the physical camera of the second device is established locally on the first device, which simulates control of a real ISP (image signal processor) and implements cross-device camera commands and data.
  • in this way, the first device can accurately identify the exact road where it is located by combining the current pictures with the road information obtained by the navigation application.
  • FIG. 5 shows a flowchart of a road identification method according to an embodiment of the present application. As shown in FIG. 5 , the road identification method of the present application may include the following steps:
  • Step S50 when monitoring the road identification request, the first device sends a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service; wherein the control command is used to control the physical camera to shoot the current picture;
  • Step S51 the first device receives the current picture returned by the physical camera in response to the control command through the DMSDP service
  • Step S52 the first device identifies the road where the first device is located according to the current picture and the road information obtained by the local navigation application.
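  • The sketch below ties steps S50-S52 together under the same hypothetical types as above: a capture command goes out through each virtual camera HAL, the returned pictures are collected, and road identification combines them with the navigation data; the actual matching of pictures against road candidates is left as a placeholder.

```java
// Hedged end-to-end sketch of steps S50-S52 on the first device.
// VirtualCameraHal, NavigationApp, and RoadIdentifier are invented types.
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

interface VirtualCameraHal {
    CompletableFuture<BufferedImage> capture();   // S50 + S51: command out, picture back
}

interface NavigationApp {
    String roadInfoForCurrentPosition();          // road candidates from the map data
}

final class RoadIdentifier {
    String identify(List<BufferedImage> pictures, String roadInfo) {
        // S52: placeholder for matching the pictures against the road candidates
        return "road chosen from " + roadInfo + " using " + pictures.size() + " pictures";
    }
}

final class RoadIdentificationFlow {
    String onRoadIdentificationRequest(List<VirtualCameraHal> virtualHals,
                                       NavigationApp nav, RoadIdentifier identifier) {
        List<BufferedImage> pictures = new ArrayList<>();
        for (VirtualCameraHal hal : virtualHals) {
            pictures.add(hal.capture().join());   // wait for each camera's current picture
        }
        return identifier.identify(pictures, nav.roadInfoForCurrentPosition());
    }
}
```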
  • the road identification request may be triggered in various situations.
  • a road recognition control may be displayed on the display interface, and when an operation that triggers the road recognition control is detected, a road identification request may be generated, and the first device can thus detect the road identification request.
  • FIGS. 6a and 6b respectively show schematic diagrams of scenarios in which a road identification request is monitored according to an example of the present application.
  • a road recognition control can be displayed on the display interface of the opened navigation application.
  • the user can manually click the road recognition control when the road conditions are complicated, to trigger road recognition.
  • the first device may generate a road identification request when the user's triggering operation is detected.
  • the trigger method of the above embodiment does not require the user to select specific road information, which can eliminate navigation errors caused by wrong subjective understanding, realize correct navigation, and reduce, to a certain extent, the potential safety hazard caused by the user being distracted while selecting the road.
  • the first device may preset, through the navigation application, the road conditions under which a road identification request needs to be generated, so that the first device generates the road identification request when monitoring that the current road meets the road conditions, and the road identification request is thus detected.
  • the road condition may be a road scene indicating that a road identification request needs to be generated
  • the road scene may be a scene of driving onto an overpass, driving on a road that includes a main road and an auxiliary road, a multi-road loop, and the like. As shown in Fig. 6b, the vehicle is located at an overpass where the road includes multiple roads such as a main road and an auxiliary road, and the road conditions are relatively complicated.
  • the first device can obtain the road scene where the device is located by using the location information obtained by positioning and the road information obtained by the navigation application, and then determine whether a road identification request needs to be generated according to the road scene.
  • when the road scene where the device is located matches a road scene for which a road identification request needs to be generated, the preset road condition is met, and the first device can generate the road identification request. For example, as shown in Fig. 6b, when the vehicle is about to enter an overpass and the distance to the overpass is less than a distance threshold, the first device can automatically generate a road identification request.
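  • As a hedged illustration of this automatic trigger, the check below generates a request only when the upcoming road scene is one of the preset complex scenes and the distance to it falls below a threshold; the scene names and the 200 m value are assumptions made for the example, not values taken from this application.

```java
// Sketch of the preset road-condition trigger. The scene set and the
// distance threshold are assumed example values.
import java.util.Set;

enum RoadScene { OVERPASS, MAIN_AND_AUXILIARY_ROAD, MULTI_ROAD_LOOP, ORDINARY }

final class RoadIdentificationTrigger {
    private static final Set<RoadScene> COMPLEX_SCENES =
            Set.of(RoadScene.OVERPASS, RoadScene.MAIN_AND_AUXILIARY_ROAD,
                   RoadScene.MULTI_ROAD_LOOP);
    private static final double DISTANCE_THRESHOLD_METERS = 200.0;  // assumed value

    /** True when a road identification request should be generated automatically. */
    boolean shouldRequest(RoadScene upcomingScene, double distanceToSceneMeters) {
        return COMPLEX_SCENES.contains(upcomingScene)
                && distanceToSceneMeters < DISTANCE_THRESHOLD_METERS;
    }
}
```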
  • the triggering method of the above-mentioned embodiment can realize correct navigation without any operation by the user, and eliminate potential safety hazards.
  • the first device may issue control commands to multiple virtual camera HALs through the distributed camera framework, and control the physical cameras corresponding to the virtual camera HALs by controlling the multiple virtual camera HALs .
  • the control command may include processes such as opening, shooting, and closing the camera.
  • after the virtual camera HAL of the first device receives the control command, it can send sub-control commands, such as an opening command and a shooting command, to the physical camera corresponding to the virtual camera HAL according to the control command.
  • the DMSDP service may include a command pipe and a data pipe, the command pipe is used for transmitting control commands, and the data pipe is used for transmitting image data. Therefore, the first device may send a control command to the physical camera corresponding to the virtual camera through the command pipeline of the DMSDP service.
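  • A minimal sketch of that command/data split is shown below; both pipes are modeled as in-process queues purely for illustration, whereas the real pipes of the DMSDP service are cross-device transport channels.

```java
// Illustrative model of the two DMSDP pipes: control commands (open, shoot,
// close) travel on the command pipe, encoded pictures come back on the data
// pipe. Queues stand in for the actual cross-device transport.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

enum CameraCommand { OPEN, CAPTURE, CLOSE }

final class DmsdpCameraChannel {
    private final BlockingQueue<CameraCommand> commandPipe = new LinkedBlockingQueue<>();
    private final BlockingQueue<byte[]> dataPipe = new LinkedBlockingQueue<>();

    // First-device side.
    void sendCommand(CameraCommand command) { commandPipe.add(command); }
    byte[] awaitPicture() throws InterruptedException { return dataPipe.take(); }

    // Second-device side.
    CameraCommand awaitCommand() throws InterruptedException { return commandPipe.take(); }
    void returnPicture(byte[] encodedPicture) { dataPipe.add(encodedPicture); }
}
```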
  • FIG. 7 shows a flowchart of a method for acquiring an image according to an embodiment of the present application. As shown in Figure 7, the following steps can be included:
  • Step S70 the second device receives a control command sent by the first device through the DMSDP service, wherein the control command is used to control the second device to shoot the current picture;
  • Step S71 the second device turns on the camera according to the control command and shoots the current picture
  • Step S72 the second device sends the current picture to the first device.
  • after the second device receives the control command through the DMSDP service, it can call the camera API (Application Programming Interface) according to the control command, send the control command to the camera HAL through the camera framework, and control the opening, shooting, and closing of the physical camera through the camera HAL.
  • the second device may send the current picture to the first device through the data pipeline of the DMSDP service. Therefore, the first device may receive the current picture returned by the physical camera in response to the control command through the data pipeline of the DMSDP service.
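  • On the second-device side, steps S70-S72 could look like the following sketch; PhysicalCamera is a hypothetical wrapper, and a real implementation would drive the camera through the platform camera API, camera framework, and camera HAL as described above.

```java
// Hedged sketch of the second device handling one capture request:
// open the camera, shoot the current picture, and hand the encoded frame
// back to the transport layer for return over the data pipe.
interface PhysicalCamera {
    void open();
    byte[] captureJpeg();   // one encoded current picture
    void close();
}

final class SecondDeviceCameraService {
    private final PhysicalCamera camera;

    SecondDeviceCameraService(PhysicalCamera camera) { this.camera = camera; }

    /** S70-S72: react to a control command and produce the current picture. */
    byte[] onControlCommand() {
        camera.open();                    // S71: turn on the camera
        try {
            return camera.captureJpeg();  // shoot the current picture
        } finally {
            camera.close();               // release the camera; the caller sends the frame (S72)
        }
    }
}
```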
  • the first device can acquire the current picture from multiple physical cameras. After receiving the multiple current pictures, the first device identifies the road where the first device is located according to the multiple current pictures and road information obtained by the local navigation application.
  • a stitched image is obtained by stitching multiple current pictures. Since the local virtual camera HAL only stores the camera capabilities of the corresponding physical camera and does not include camera algorithms, the virtual camera HAL of the first device can, after receiving a current picture, send it to the camera HAL of the local camera, and the camera HAL of the local camera uses its camera algorithm to stitch the current pictures.
  • after the camera HAL of the local camera obtains the stitched image, the stitched image can be transmitted to the local navigation application, and the local navigation application of the first device identifies the specific road where the first device is located by combining the stitched image and the road information obtained by the navigation application. The navigation application can obtain the road information from a cloud server.
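  • The local processing path can be sketched as below; the stitching here is a trivial horizontal concatenation only to make the data flow concrete (a real camera algorithm would do feature-based panoramic stitching), and desktop-Java BufferedImage is used merely to keep the example self-contained.

```java
// Simplified stand-in for the local camera HAL's stitching step: the
// front/rear/left/right pictures are placed side by side into one image,
// which would then be handed to the navigation application.
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

final class LocalCameraHal {
    /** Stitch the current pictures from multiple directions into one image. */
    BufferedImage stitch(List<BufferedImage> pictures) {
        int width = pictures.stream().mapToInt(BufferedImage::getWidth).sum();
        int height = pictures.stream().mapToInt(BufferedImage::getHeight).max().orElse(1);
        BufferedImage panorama = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = panorama.createGraphics();
        int x = 0;
        for (BufferedImage picture : pictures) {
            g.drawImage(picture, x, 0, null);  // place each picture next to the previous one
            x += picture.getWidth();
        }
        g.dispose();
        return panorama;
    }
}
```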
  • the road identification method of the embodiment of the present application can realize communication between a mobile phone and a plurality of vehicle-mounted cameras: the mobile phone controls the vehicle-mounted cameras to capture the current pictures around the vehicle, the current pictures are transmitted back to the mobile phone, and the mobile phone determines accurate road information by integrating the current pictures with the road information obtained by the navigation application.
  • the road identification method of the embodiment of the present application can locate an accurate road in combination with the real-time pictures around the vehicle, and can still navigate correctly in the case of complex road conditions.
  • compared with relying only on the navigation application of the terminal, it can locate the accurate road in combination with the real-time pictures around the vehicle without requiring the user to manually select specific road information, which achieves correct navigation and eliminates potential safety hazards.
  • FIG. 8 shows a flowchart of a road identification method according to an embodiment of the present application. As shown in Figure 8, the method further includes:
  • Step S80 the first device obtains the hardware parameters of the physical camera through the DMSDP service
  • Step S81 the first device configures the corresponding virtual camera locally according to the hardware parameters of the physical camera
  • Step S82 when detecting the road identification request, the first device sends a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service; wherein the control command is used to control the physical camera to shoot the current picture;
  • Step S83 the first device receives the current picture returned by the physical camera in response to the control command through the DMSDP service
  • Step S84 the first device identifies the road where the first device is located according to the current picture and the road information obtained by the local navigation application.
  • the hardware parameters include camera capability parameters of the physical camera
  • step S81 may include: the first device creates a virtual camera hardware abstraction layer HAL and a camera framework corresponding to the physical camera according to the camera capability parameters, and adds the camera framework to the distributed camera framework.
  • for steps S82-S84, reference may be made to the description of steps S50-S52; details are not repeated here.
  • FIG. 9 shows a schematic diagram of an application scenario according to an embodiment of the present application.
  • FIG. 10 shows an interaction diagram between a first device and a second device according to an embodiment of the present application.
  • the road identification method of the present application will be further described with reference to FIG. 9 and FIG. 10 .
  • the first device can communicate with a plurality of second devices, and cameras are installed on the second devices.
  • the first device may be a mobile phone
  • the second device may be a vehicle-mounted camera. The user drives the car, navigates through the first device, and accurately identifies the road in combination with the current pictures captured by the vehicle-mounted cameras.
  • the camera programming interfaces of the first device and the second device can be unified as the CameraKit interface, and the camera interfaces provided by the DMSDP services of the first device and the second device can be connected, so that communication is established between the two devices. Then, as shown in FIG. 10, the first device sends a configuration request to the second device (S100). After receiving the configuration request, the second device sends the hardware parameters of the physical camera to the first device in the form of a standard interface (S200).
  • the first device may configure a virtual camera corresponding to the physical camera according to the hardware parameters, which specifically includes: the first device creates a virtual camera HAL and a camera framework corresponding to the physical camera according to the camera capability parameters (S101).
  • the first device can control the corresponding physical camera by controlling the virtual camera; for example, by controlling the virtual camera, the physical camera can be controlled to capture the current picture around the vehicle body.
  • the navigation application can identify the road where the car body is located by combining the current picture and the road information obtained by the navigation application. As shown in Figure 10, this may specifically include the following steps:
  • when detecting the road identification request, the first device sends a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service (S102).
  • the second device turns on the camera according to the control command and shoots the current picture (S201).
  • the second device sends the current picture to the first device (S202).
  • after receiving the current picture, the first device identifies the road where the first device is located according to the current picture and the road information obtained by the local navigation application (S103). Specifically, as shown in FIG. 9, after the vehicle-mounted cameras located at the front, rear, left, right and other directions of the vehicle turn on their cameras according to the control command and capture the current pictures, the current pictures are sent to the first device.
  • the first device receives the current pictures in multiple directions and can stitch them to obtain a stitched image, which can be a panoramic image. The navigation application of the first device can combine the stitched image and the road information obtained by the navigation application to identify the exact road, for example, whether the vehicle is on the bridge or under the bridge, which lane of a multi-lane road it is in, and whether it is on the main road or the auxiliary road.
  • FIG. 11 shows a schematic diagram of a road recognition result according to an embodiment of the present application. As shown in Figure 11, combining the panoramic image and the road information obtained by navigation, it can be determined that the vehicle is in the left lane.
  • the road recognition method of the present application is based on distributed camera device virtualization technology, which combines the camera processing capabilities of different devices so that a device such as a mobile phone forms a camera data center for the vehicle.
  • the camera data stream is analyzed to determine the current road location of the vehicle, thereby improving the user experience of the vehicle owner.
  • FIG. 12 shows a block diagram of a road identification device according to an embodiment of the present application.
  • the road identification apparatus of an embodiment provided in this application can be applied to a first device, where the first device includes two or more virtual camera HALs.
  • a road identification device according to an embodiment provided by the present application may include:
  • the control module 91 is configured to send a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service when the road identification request is monitored; wherein, the control command is used to control the physical camera to shoot the current picture;
  • a first receiving module 92 configured to receive the current picture returned by the physical camera in response to the control command through the DMSDP service
  • the identification module 93 is configured to identify the road where the first device is located according to the current picture and the road information obtained by the local navigation application.
  • the apparatus further includes:
  • an acquisition module configured to acquire hardware parameters of the physical camera through the DMSDP service
  • the configuration module is configured to locally configure the corresponding virtual camera according to the hardware parameters of the physical camera.
  • the hardware parameters include camera capability parameters of a physical camera
  • the configuration module includes: a configuration unit configured to create a virtual camera hardware abstraction layer HAL and a camera framework corresponding to the physical camera according to the camera capability parameters, and add the camera framework to the distributed camera framework.
  • the DMSDP service includes a command pipeline and a data pipeline
  • the control module 91 includes: a control unit, configured to send a control command to the physical camera corresponding to the virtual camera through the command pipeline of the DMSDP service;
  • the first receiving module 92 includes: a receiving unit, configured to receive the current picture returned by the physical camera in response to the control command through a data pipeline of the DMSDP service.
  • the identification module 93 includes:
  • a splicing unit used for splicing a plurality of the current pictures through the camera HAL of the local camera to obtain a spliced image
  • the identification unit is configured to identify the road where the first device is located according to the stitched image and the road information obtained by the navigation application through the navigation application.
  • the physical camera is a vehicle-mounted camera.
  • FIG. 13 shows a block diagram of an apparatus for acquiring an image according to an embodiment of the present application. As shown in FIG. 13 , the apparatus includes:
  • the second receiving module 94 is configured to receive a control command sent by the first device through the DMSDP service, wherein the control command is used to control the second device to shoot a current picture;
  • the photographing module 95 is used to turn on the camera and photograph the current picture according to the control command
  • the first sending module 96 is configured to send the current picture to the first device.
  • the apparatus further includes:
  • the second sending module is configured to send the hardware parameters of the physical camera of the second device to the first device through the DMSDP service when receiving the configuration request sent by the first device.
  • FIG. 14 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. Taking the terminal device being a mobile phone as an example, FIG. 14 shows a schematic structural diagram of the mobile phone 200 .
  • the mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, Audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone jack 270D, sensor module 280, buttons 290, motor 291, indicator 292, camera 293, display screen 294, SIM card interface 295, etc.
  • the sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., not shown in the figure).
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the mobile phone 200 .
  • the mobile phone 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 210 may include one or more processing units; for example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 200 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is cache memory.
  • the memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated accesses and reduces the waiting time of the processor 210, thereby improving the efficiency of the system.
  • the processor 210 can run the road identification method provided by the embodiments of the present application, so that navigation remains correct under complex road conditions without manual selection by the user, which realizes correct navigation and eliminates potential safety hazards at the same time.
  • the processor 210 may include different devices. For example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to execute the road identification method provided by the embodiments of the present application; for example, some algorithms of the road identification method are executed by the CPU and other algorithms are executed by the GPU, for faster processing efficiency.
  • Display screen 294 is used to display images, videos, and the like.
  • Display screen 294 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • cell phone 200 may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the display screen 294 may be used to display information entered by or provided to the user as well as various graphical user interfaces (GUIs).
  • display 294 may display photos, videos, web pages, or documents, and the like.
  • display 294 may display a graphical user interface.
  • the GUI includes a status bar, a hideable navigation bar, a time and weather widget, and an application icon, such as a browser icon.
  • the status bar includes operator name (eg China Mobile), mobile network (eg 4G), time and remaining battery.
  • the navigation bar includes a back button icon, a home button icon, and a forward button icon.
  • the status bar may further include a Bluetooth icon, a Wi-Fi icon, an external device icon, and the like.
  • the graphical user interface may further include a Dock bar, and the Dock bar may include commonly used application icons and the like.
  • the display screen 294 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • the terminal device can establish a connection with other terminal devices through the antenna 1, the antenna 2 or the USB interface, transmit data according to the road identification method provided by the embodiments of the present application, and control the display screen 294 to display a corresponding graphical user interface.
  • the camera 293 (a front camera or a rear camera, or a camera that can serve as both a front camera and a rear camera) is used to capture still images or video.
  • the camera 293 may include a photosensitive element such as a lens group and an image sensor, wherein the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed, and transmitting the collected light signal to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes various functional applications and data processing of the mobile phone 200 by executing the instructions stored in the internal memory 221 .
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area may store operating system, code of application programs (such as camera application, WeChat application, etc.), and the like.
  • the storage data area may store data created during the use of the mobile phone 200 (such as images and videos collected by the camera application) and the like.
  • the internal memory 221 may also store one or more computer programs 1310 corresponding to the road identification method provided in this embodiment of the present application.
  • the one or more computer programs 1310 are stored in the aforementioned memory 221 and configured to be executed by the one or more processors 210, and the one or more computer programs 1310 include instructions that may be used to perform the methods shown in FIG. 5 and FIG. 8.
  • the computer program 1310 may include a control module 91 , a first receiving module 92 and an identification module 93 .
  • the control module 91 is configured to send a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service when the road identification request is monitored, wherein the control command is used to control the physical camera to shoot the current picture; the first receiving module 92 is used to receive, through the DMSDP service, the current picture returned by the physical camera in response to the control command; the identification module 93 is used to identify the road where the first device is located according to the current picture and the road information obtained by the local navigation application.
  • the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the code of the road identification method provided by the embodiment of the present application may also be stored in an external memory.
  • the processor 210 may execute the code of the road identification method stored in the external memory through the external memory interface 220 .
  • the function of the sensor module 280 is described below.
  • the gyro sensor 280A can be used to determine the movement posture of the mobile phone 200. In some embodiments, the angular velocity of the mobile phone 200 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 280A.
  • the gyro sensor 280A can be used to detect the current motion state of the mobile phone 200, such as shaking or still.
  • the gyro sensor 280A can be used to detect a folding or unfolding operation acting on the display screen 294 .
  • the gyroscope sensor 280A may report the detected folding operation or unfolding operation to the processor 210 as an event to determine the folding state or unfolding state of the display screen 294 .
  • the acceleration sensor 280B can detect the magnitude of the acceleration of the mobile phone 200 in various directions (generally along three axes); that is, the acceleration sensor 280B can be used to detect the current motion state of the mobile phone 200, such as shaking or stationary. When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 280B can be used to detect a folding or unfolding operation acting on the display screen 294. The acceleration sensor 280B may report the detected folding or unfolding operation to the processor 210 as an event to determine the folded or unfolded state of the display screen 294.
  • Proximity light sensor 280G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the mobile phone emits infrared light outward through light-emitting diodes.
  • the mobile phone uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the phone; when insufficient reflected light is detected, the phone can determine that there is no object nearby.
  • when the display screen in the embodiment of the present application is a foldable screen, the proximity light sensor 280G can be arranged on the first screen of the foldable display screen 294, and the proximity light sensor 280G can detect the folding or unfolding angle between the first screen and the second screen according to the optical path difference of the infrared signal.
  • the gyroscope sensor 280A (or the acceleration sensor 280B) may send the detected motion state information (such as angular velocity) to the processor 210 .
  • the processor 210 determines, based on the motion state information, whether the current state is the hand-held state or the tripod state (for example, when the angular velocity is not 0, it means that the mobile phone 200 is in the hand-held state).
  • the fingerprint sensor 280H is used to collect fingerprints.
  • the mobile phone 200 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the touch sensor 280K is also called a "touch panel".
  • the touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touchscreen, also called a "touch screen".
  • the touch sensor 280K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 294 .
  • the touch sensor 280K may also be disposed on the surface of the mobile phone 200 , which is different from the location where the display screen 294 is located.
  • the display screen 294 of the mobile phone 200 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • Display screen 294 displays an interface of a camera application, such as a viewfinder interface.
  • the wireless communication function of the mobile phone 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in handset 200 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 251 can provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the mobile phone 200 .
  • the mobile communication module 251 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 251 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 251 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 251 may be provided in the processor 210 .
  • At least part of the functional modules of the mobile communication module 251 may be provided in the same device as at least part of the modules of the processor 210 .
  • the mobile communication module 251 may also be used for information interaction with other terminal devices.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 270A, the receiver 270B, etc.), or displays images or videos through the display screen 294 .
  • the modem processor may be a stand-alone device.
  • the modulation and demodulation processor may be independent of the processor 210, and may be provided in the same device as the mobile communication module 251 or other functional modules.
  • the wireless communication module 252 can provide wireless communication solutions applied on the mobile phone 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 252 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 252 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210 .
  • the wireless communication module 252 can also receive the signal to be sent from the processor 210, perform frequency modulation on it, amplify it, and convert it into an electromagnetic wave for radiation through the antenna 2.
  • the mobile phone 200 can implement audio functions, such as music playback and recording, through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, and an application processor.
  • the cell phone 200 can receive key 290 input and generate key signal input related to user settings and function control of the cell phone 200 .
  • the mobile phone 200 can use the motor 291 to generate vibration alerts (eg, vibration alerts for incoming calls).
  • the indicator 292 in the mobile phone 200 may be an indicator light, which may be used to indicate a charging state, a change in power, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 295 in the mobile phone 200 is used to connect a SIM card. The SIM card can be brought into contact with and separated from the mobile phone 200 by being inserted into the SIM card interface 295 or pulled out of the SIM card interface 295.
  • the mobile phone 200 may include more or fewer components than those shown in FIG. 14, which is not limited in this embodiment of the present application.
  • the illustrated handset 200 is merely an example, and the handset 200 may have more or fewer components than those shown, two or more components may be combined, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the software system of the terminal device can adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of a terminal device.
  • FIG. 15 is a block diagram of a software structure of a terminal device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the telephony manager is used to provide the communication function of the terminal device. For example, the management of call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the terminal device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • An embodiment of the present application provides a road identification device, including: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to, when executing the instructions, implement the road identification method shown in FIG. 5 and FIG. 8 above.
  • An embodiment of the present application provides an apparatus for acquiring an image, comprising: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the above method for acquiring an image when executing the instructions.
  • Embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, implement the above-mentioned road identification method or image acquisition method.
  • Embodiments of the present application provide a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, where when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above-mentioned road identification method or image acquisition method.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Examples of computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital video disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove on which instructions are stored, and any suitable combination of the foregoing.
  • Computer readable program instructions or code described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • the computer program instructions used to perform the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • in some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, so that the instructions, when executed by the processor of the computer or the other programmable data processing apparatus, produce an apparatus for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, causing a series of operational steps to be performed on the computer, the other programmable apparatus, or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable apparatus, or the other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in hardware that performs the corresponding functions or actions (e.g., circuits or application-specific integrated circuits (ASICs)), or can be implemented by a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A road identification method and apparatus. The road identification method includes: when a first device detects a road identification request, sending a control command to the physical camera corresponding to a virtual camera HAL through a DMSDP service, where the control command is used to control the physical camera to capture the current picture (S50); receiving, by the first device through the DMSDP service, the current picture returned by the physical camera in response to the control command (S51); and identifying, by the first device, the road on which the first device is located according to the current picture and road information obtained by the local navigation application. This road identification method can locate the exact road by combining real-time pictures of the vehicle's surroundings, can still navigate correctly under complex road conditions, and requires no manual selection by the user, thereby achieving correct navigation while eliminating safety hazards.

Description

Road identification method and apparatus
This application claims priority to Chinese Patent Application No. 202011309744.3, filed with the Chinese Patent Office on November 20, 2020 and entitled "Road Identification Method and Apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computer technologies, and in particular, to a road identification method and apparatus.
Background
Vehicle-mounted surround-view imaging is an automotive camera technology that combines the pictures captured by multiple cameras on a vehicle to provide various views, such as a top view, a rear view, and a panoramic view. FIG. 1 is a schematic diagram of an example of vehicle-mounted surround-view imaging. As shown in FIG. 1, multiple cameras can be installed at positions such as the front, rear, left, and right of the vehicle to capture pictures in different directions, showing the driver the view outside the vehicle, providing assistance, and warning the driver of obstacles in the path that may not be immediately visible. However, vehicle-mounted surround-view imaging only displays the road environment near the vehicle and cannot perform positioning.
In-vehicle navigation applies GPS navigation to vehicles through commercial communication satellites to provide navigation services for drivers. By receiving GPS satellite signals, a GPS in-vehicle navigation system can navigate a vehicle on urban roads and suburban highways, and even in rarely visited areas such as deserts, gobi, and grasslands, sparing the driver the trouble caused by unfamiliarity with road conditions.
Navigation applications that use satellites on terminal hardware to provide positioning, guidance, and a series of other services bring great convenience to travel. At present, navigation applications can provide services such as route recommendation between two places, road condition prompts, and real-time positioning, and have the advantages of high global coverage, speed, time saving, high efficiency, wide and multi-functional application, and mobile positioning.
In-vehicle navigation and navigation applications on terminals are limited by the scenario during navigation. For example, when a vehicle is driving on a highway or under a viaduct, or on a wide but curved road, it cannot be accurately positioned in the correct lane under relatively complex road conditions. Alternatively, the user is required to select the lane manually. FIG. 2a and FIG. 2b are schematic diagrams of an example in which a user manually selects a lane. As shown in FIG. 2a, when the vehicle approaches a viaduct, an icon (control) is displayed in the lower left corner of the display, with which the user can manually select whether the vehicle is on the bridge. As shown in FIG. 2b, if the user selects "not on the bridge" (under the bridge), the display can further show a control with which the user can select whether the vehicle is on the side road (lower right of FIG. 2b). The user's perception of the road is subjective to some extent: on the one hand, it is easy to choose the wrong road, leading to wrong turns and detours; on the other hand, it distracts the user and may cause safety hazards.
Summary
In view of this, a road identification method and apparatus are proposed, which can locate the exact road by combining real-time pictures of the vehicle's surroundings, can still navigate correctly under complex road conditions, and require no manual selection by the user, thereby achieving correct navigation while eliminating safety hazards.
According to a first aspect, an embodiment of this application provides a road identification method, where the method is applied to a first device, the first device includes two or more virtual camera HALs, and the method includes:
when detecting a road identification request, sending, by the first device, a control command to the physical camera corresponding to a virtual camera HAL through a DMSDP service, where the control command is used to control the physical camera to capture the current picture;
receiving, by the first device through the DMSDP service, the current picture returned by the physical camera in response to the control command;
identifying, by the first device, the road on which the first device is located according to the current picture and road information obtained by a local navigation application.
In a first possible implementation of the first aspect, the first device obtains hardware parameters of the physical camera through the DMSDP service, and the first device locally configures a corresponding virtual camera according to the hardware parameters of the physical camera.
According to the first possible implementation of the first aspect, in a second possible implementation, the hardware parameters include camera capability parameters of the physical camera, and the locally configuring, by the first device, a corresponding virtual camera according to the hardware parameters of the physical camera includes: creating, by the first device according to the camera capability parameters, a virtual camera hardware abstraction layer (HAL) and a camera framework corresponding to the physical camera, and adding the camera framework to a distributed camera framework.
In a third possible implementation of the first aspect, the DMSDP service includes a command pipe and a data pipe,
the sending, by the first device, a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service includes: sending, by the first device, the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service;
and the receiving, by the first device through the DMSDP service, the current picture returned by the physical camera in response to the control command includes: receiving, by the first device through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
In a fourth possible implementation of the first aspect, the identifying, by the first device, the road on which the first device is located according to the current picture and road information obtained by a local navigation application includes:
stitching, by the camera HAL of the local camera of the first device, multiple current pictures to obtain a stitched image;
identifying, by the navigation application of the first device, the road on which the first device is located according to the stitched image and the road information obtained by the navigation application.
According to the first aspect or any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation, the physical camera is a vehicle-mounted camera.
According to a second aspect, an embodiment of this application provides a method for acquiring an image, where the method includes:
receiving, by a second device through a DMSDP service, a control command sent by a first device, where the control command is used to control the second device to capture the current picture;
opening, by the second device, a camera according to the control command and capturing the current picture;
sending, by the second device, the current picture to the first device.
In a first possible implementation of the second aspect, the method further includes:
when receiving a configuration request sent by the first device, sending, by the second device through the DMSDP service, hardware parameters of the physical camera of the second device to the first device.
According to a third aspect, an embodiment of this application provides a road identification apparatus, where the apparatus is applied to a first device, the first device includes two or more virtual camera HALs, and the apparatus includes:
a control module, configured to send, when a road identification request is detected, a control command to the physical camera corresponding to a virtual camera HAL through a DMSDP service, where the control command is used to control the physical camera to capture the current picture;
a first receiving module, configured to receive, through the DMSDP service, the current picture returned by the physical camera in response to the control command;
an identification module, configured to identify the road on which the first device is located according to the current picture and road information obtained by a local navigation application.
In a first possible implementation of the third aspect, the apparatus further includes:
an obtaining module, configured to obtain hardware parameters of the physical camera through the DMSDP service;
a configuration module, configured to locally configure a corresponding virtual camera according to the hardware parameters of the physical camera.
According to the first possible implementation of the third aspect, in a second possible implementation, the hardware parameters include camera capability parameters of the physical camera,
and the configuration module includes:
a configuration unit, configured to create, according to the camera capability parameters, a virtual camera hardware abstraction layer (HAL) and a camera framework corresponding to the physical camera, and add the camera framework to a distributed camera framework.
In a third possible implementation of the third aspect, the DMSDP service includes a command pipe and a data pipe,
the control module includes: a control unit, configured to send the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service;
and the first receiving module includes: a receiving unit, configured to receive, through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
In a fourth possible implementation of the third aspect, the identification module includes:
a stitching unit, configured to stitch multiple current pictures through the camera HAL of the local camera to obtain a stitched image;
an identification unit, configured to identify, through the navigation application, the road on which the first device is located according to the stitched image and the road information obtained by the navigation application.
According to the third aspect or any one of the first to fourth possible implementations of the third aspect, in a fifth possible implementation, the physical camera is a vehicle-mounted camera.
According to a fourth aspect, an embodiment of this application provides an apparatus for acquiring an image, where the apparatus includes:
a second receiving module, configured to receive, through a DMSDP service, a control command sent by a first device, where the control command is used to control the second device to capture the current picture;
a capture module, configured to open a camera according to the control command and capture the current picture;
a first sending module, configured to send the current picture to the first device.
In a first possible implementation of the fourth aspect, the apparatus further includes:
a second sending module, configured to send, when a configuration request sent by the first device is received, hardware parameters of the physical camera of the second device to the first device through the DMSDP service.
According to a fifth aspect, an embodiment of this application provides a data transmission apparatus, including:
a processor;
a memory for storing instructions executable by the processor;
where the processor is configured to, when executing the instructions, implement the method according to any implementation of the first aspect, or implement the method according to any implementation of the second aspect.
According to a sixth aspect, an embodiment of this application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the method according to any implementation of the first aspect, or implement the method according to any implementation of the second aspect.
According to a seventh aspect, an embodiment of this application provides a terminal device that can perform the road identification method according to the first aspect or one or more of the multiple possible implementations of the first aspect.
According to an eighth aspect, an embodiment of this application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device performs the road identification method according to the first aspect or one or more of the multiple possible implementations of the first aspect.
According to a ninth aspect, an embodiment of this application provides a terminal device that can perform the image acquisition method according to the second aspect or one or more of the multiple possible implementations of the second aspect.
According to a tenth aspect, an embodiment of this application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device performs the image acquisition method according to the second aspect or one or more of the multiple possible implementations of the second aspect.
These and other aspects of this application will be more concise and comprehensible in the descriptions of the following embodiment(s).
Brief Description of Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, together with the specification illustrate exemplary embodiments, features, and aspects of this application, and are used to explain the principles of this application.
FIG. 1 is a schematic diagram of an example of vehicle-mounted surround-view imaging.
FIG. 2a and FIG. 2b are schematic diagrams of an example in which a user manually selects a lane.
FIG. 3 is a schematic diagram of an application scenario of a road identification method according to an embodiment of this application.
FIG. 4 is a block diagram of device configuration in an application scenario according to an embodiment of this application.
FIG. 5 is a flowchart of a road identification method according to an embodiment of this application.
FIG. 6a and FIG. 6b are schematic diagrams of scenarios in which a road identification request is detected according to an example of this application.
FIG. 7 is a flowchart of a method for acquiring an image according to an embodiment of this application.
FIG. 8 is a flowchart of a road identification method according to an embodiment of this application.
FIG. 9 is a schematic diagram of an application scenario according to an embodiment of this application.
FIG. 10 is an interaction diagram between a first device and a second device according to an embodiment of this application.
FIG. 11 is a schematic diagram of a road identification result according to an embodiment of this application.
FIG. 12 is a block diagram of a road identification apparatus according to an embodiment of this application.
FIG. 13 is a block diagram of an apparatus for acquiring an image according to an embodiment of this application.
FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of this application.
FIG. 15 is a block diagram of a software structure of a terminal device according to an embodiment of this application.
Detailed Description of Embodiments
Various exemplary embodiments, features, and aspects of this application are described in detail below with reference to the accompanying drawings. Identical reference numerals in the accompanying drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the accompanying drawings, the accompanying drawings are not necessarily drawn to scale unless otherwise specified.
The specific term "exemplary" here means "used as an example or embodiment, or for illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
In addition, to better describe this application, numerous specific details are given in the following specific implementations. A person skilled in the art should understand that this application can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to a person skilled in the art are not described in detail, so as to highlight the subject matter of this application.
In the related technology, pictures of the surroundings are captured by a mobile phone, and the exact road is located according to the captured pictures and road information obtained by a navigation application. However, this approach is not suitable for a moving vehicle: the multiple captured pictures may be taken at different locations, which is not conducive to accurate positioning.
To resolve the foregoing technical problem, this application provides a road identification method. The road identification method in the embodiments of this application can implement communication between a mobile phone (or another terminal device, such as a tablet or a smart watch) and multiple vehicle-mounted cameras: the mobile phone controls the vehicle-mounted cameras to capture current pictures of the vehicle's surroundings, the current pictures are returned to the mobile phone, and the mobile phone combines the current pictures with road information obtained by the navigation application to determine accurate road information. Compared with in-vehicle navigation in the conventional technology, the road identification method in the implementations of this application can locate the exact road by combining real-time pictures of the vehicle's surroundings and can still navigate correctly under complex road conditions. Compared with a navigation application on a terminal, it can likewise locate the exact road by combining real-time pictures of the vehicle's surroundings, requires no manual selection by the user, and can achieve correct navigation while eliminating safety hazards.
FIG. 3 is a schematic diagram of an application scenario of a road identification method according to an embodiment of this application. As shown in FIG. 3, the application scenario may include a first device, which may be a terminal device such as a mobile phone, a tablet, or a smart watch. The application scenario may also include one or more second devices, which may be vehicle-mounted cameras, mobile phones, tablets, smart watches, or the like that include cameras. In one application scenario, the first device may be a mobile phone and the second devices may be vehicle-mounted cameras.
A communication connection is established between the first device and the second device, and data can be transmitted directly between the first device and the second device.
In a possible implementation, communication can be established between the first device and the second device through a DMSDP (distribute mobile sensing development platform) service. Specifically, the camera interfaces provided by the DMSDP services of the first device and the second device can be connected to each other, thereby establishing a cross-device dual-view scenario, and commands and data can be transmitted between the first device and the second device through the DMSDP service. The camera interface provided by the DMSDP service may be a CameraKit interface; by unifying the camera programming interfaces of the first device and the second device as the CameraKit interface, the camera interfaces can be connected.
After DMSDP communication is established, the first device and the second device can communicate according to the DMSDP protocol. The DMSDP service may include a command pipe and a data pipe. The first device can send a configuration request to the second device through the DMSDP service, where the configuration request is used to obtain hardware parameters of the physical camera of the second device. Specifically, the first device can send the configuration request to the second device through the command pipe of the DMSDP service.
Upon receiving the configuration request, in response to it the second device can send the hardware parameters of its physical camera to the first device in the form of a standard interface through the DMSDP service; for example, the hardware parameters of the physical camera can be sent through the data pipe of the DMSDP service. The first device receives the hardware parameters of the physical camera of the second device through the DMSDP service, obtains them by reading the standard interface, and locally configures a corresponding virtual camera according to the hardware parameters of the physical camera of the second device. Specifically, the hardware parameters may include camera capability parameters of the physical camera, and the first device can create, according to the camera capability parameters, a virtual camera hardware abstraction layer (HAL) and a camera framework corresponding to the physical camera, and add the camera framework to a distributed camera framework.
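For illustration only, the following minimal Java sketch shows one way this configuration step could be organized. The type names (DmsdpService, CameraCapability, VirtualCameraHal, DistributedCameraFramework) and method signatures are hypothetical assumptions for this sketch: the disclosure names the concepts but defines no programming interfaces for them.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical placeholder types; none of these APIs come from the disclosure.
    interface DmsdpService {
        // Configuration request over the command pipe; capability list returned
        // over the data pipe in the form of a standard interface.
        List<CameraCapability> requestCameraCapabilities(String deviceId);
    }
    record CameraCapability(String cameraId, float aperture, float focalLengthMm, boolean hdr) {}
    final class VirtualCameraHal {
        final String deviceId;
        final CameraCapability capability; // full capability set mirrored locally
        VirtualCameraHal(String deviceId, CameraCapability capability) {
            this.deviceId = deviceId;
            this.capability = capability;
        }
    }
    final class DistributedCameraFramework {
        private final List<VirtualCameraHal> cameraFrameworks = new ArrayList<>();
        void add(VirtualCameraHal hal) { cameraFrameworks.add(hal); } // one per camera
    }

    final class VirtualCameraConfigurator {
        // Mirrors every physical camera of the second device as a local virtual camera,
        // so that controlling the virtual camera equals remotely controlling the real one.
        static void configure(DmsdpService dmsdp, DistributedCameraFramework framework,
                              String secondDeviceId) {
            for (CameraCapability cap : dmsdp.requestCameraCapabilities(secondDeviceId)) {
                framework.add(new VirtualCameraHal(secondDeviceId, cap));
            }
        }
    }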
The virtual camera HAL is a hardware abstraction layer of the physical camera of the second device. The HAL is an interface layer between the operating system kernel and the hardware circuitry; it manages the access interfaces of the hardware in modules, and each hardware module corresponds to a dynamic link library file. Therefore, a camera HAL is the access interface of a camera: by accessing the virtual camera HAL and sending the access to the physical camera corresponding to the virtual camera HAL through the DMSDP service, access to the corresponding physical camera can be implemented.
The camera capability parameters may include aperture parameters, focal length parameters, and the like, and may also include high dynamic range (HDR) capability, portrait capability, super night scene capability, and so on. By creating the virtual camera according to the hardware parameters of the physical camera, the metadata of the physical camera can be fully synchronized, ensuring that control of the local virtual camera is exactly the same as remote control of the physical camera.
By controlling the virtual camera HAL, the first device can control the physical camera of the second device corresponding to the virtual camera. For example, the first device can open the corresponding physical camera of the second device for shooting by locally opening the virtual camera HAL, and can also close the corresponding physical camera of the second device through the local virtual camera HAL.
In a possible implementation, the first device may locally include one or more virtual camera HALs, with one virtual camera HAL corresponding to one physical camera of a second device. For example, assuming that the vehicle is equipped with two or more physical cameras, the first device can virtualize the two or more physical cameras locally and create corresponding virtual camera HALs. Alternatively, the second device may also be a mobile phone, and one virtual camera HAL on the first device corresponds to multiple physical cameras (a front-facing camera and a rear-facing camera) on the second device.
In a possible implementation, this application does not limit the types of the operating systems (OS) of the first device and the second device. The first device may run an operating system such as Android OS, HarmonyOS, or Linux OS, and the second device may also run an operating system such as Android OS, HarmonyOS, Linux OS, or LiteOS.
FIG. 4 is a block diagram of device configuration in an application scenario according to an embodiment of this application. As shown in FIG. 4, the first device can communicate with multiple second devices through the DMSDP service. Multiple virtual camera HALs can be configured on the first device, with one virtual camera HAL corresponding to one physical camera of a second device, and a second device may include one or more physical cameras.
In a possible implementation, as shown in FIG. 4, a distributed camera framework may also be configured on the first device. When receiving the hardware parameters of the physical camera of the second device, the first device may also create a corresponding camera framework and add the camera framework to the distributed camera framework. The distributed camera framework includes the camera frameworks corresponding to the multiple virtual camera HALs and a control module; the control module can connect the multiple camera frameworks and is responsible for dynamic pipeline control, device collaboration management, and the like.
A DMSDP service is established between the first device and the second device, and a virtual camera HAL of the physical camera of the second device is established locally on the first device, simulating real ISP (image signal processor) control so that camera commands and data can cross devices. According to the configuration manner of the foregoing implementations of this application, fast control of the physical camera of the second device can be implemented, the current pictures of the first device's surroundings returned by the physical camera of the second device can be obtained, and the exact road on which the first device is located can be accurately identified by combining the current pictures with the road information obtained by the navigation application of the first device.
FIG. 5 is a flowchart of a road identification method according to an embodiment of this application. As shown in FIG. 5, the road identification method of this application may include the following steps (an illustrative sketch follows the step list):
Step S50: When detecting a road identification request, the first device sends a control command to the physical camera corresponding to a virtual camera HAL through the DMSDP service, where the control command is used to control the physical camera to capture the current picture.
Step S51: The first device receives, through the DMSDP service, the current picture returned by the physical camera in response to the control command.
Step S52: The first device identifies the road on which the first device is located according to the current picture and road information obtained by the local navigation application.
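As a hedged illustration of steps S50 to S52, the Java sketch below strings the three steps together on the first device. VirtualCamera, Navigator, Frame, RoadInfo, and Road are hypothetical placeholder types, and the DMSDP command and data pipes are assumed to sit behind sendCaptureCommand() and awaitFrame(); this is a sketch of the flow, not a disclosed implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical placeholder types for this sketch.
    interface VirtualCamera {
        void sendCaptureCommand();  // S50: control command over the DMSDP command pipe
        Frame awaitFrame();         // S51: current picture over the DMSDP data pipe
    }
    interface Navigator {
        RoadInfo currentRoadInfo();                            // road info from the nav app
        Road identifyRoad(List<Frame> frames, RoadInfo info);  // S52: fuse pictures + road info
    }
    interface Frame {}
    interface RoadInfo {}
    interface Road {}

    final class RoadRecognitionFlow {
        static Road onRoadRecognitionRequest(List<VirtualCamera> cameras, Navigator nav) {
            List<Frame> frames = new ArrayList<>();
            for (VirtualCamera cam : cameras) {   // the first device has two or more
                cam.sendCaptureCommand();         // S50
                frames.add(cam.awaitFrame());     // S51
            }
            return nav.identifyRoad(frames, nav.currentRoadInfo()); // S52
        }
    }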
The road identification request may be triggered in multiple situations. In a possible implementation, when the navigation application of the first device is running, a road identification control may be displayed on the display interface; when an operation triggering the road identification control is detected, a road identification request may be generated, and the first device may detect the road identification request. FIG. 6a and FIG. 6b are schematic diagrams of scenarios in which a road identification request is detected according to an example of this application. As shown in FIG. 6a, a road identification control may be displayed on the display interface of the opened navigation application. When about to enter an overpass, because the road conditions are relatively complex, the user can manually tap the road identification control to trigger road identification, and the first device can generate a road identification request when detecting the user's trigger operation.
The trigger method of the foregoing implementation does not require the user to select specific road information, can eliminate navigation errors caused by mistaken subjective perception, can achieve correct navigation, and can to some extent reduce the safety hazards caused by the user being distracted while selecting a road.
In another possible implementation, the first device may preset, through the navigation application, road conditions under which a road identification request needs to be generated. In this way, when detecting that the road on which it is currently located satisfies a road condition, the first device generates a road identification request and thereby detects the road identification request. The road condition may represent a road scenario in which a road identification request needs to be generated, such as about to drive onto an overpass, driving on a road that includes a main road and a side road, or a loop of multiple roads. As shown in FIG. 6b, the vehicle is located at an overpass and the road includes multiple roads such as a main road and a side road, so the road conditions are relatively complex. The first device can derive the road scenario in which the device is located from the position information obtained by positioning and the road information obtained by the navigation application, and then determine, according to the road scenario, whether a road identification request needs to be generated. If the road scenario in which the first device is currently located is consistent with the preset road scenario in which a road identification request needs to be generated, that is, the preset road condition is satisfied, the first device can generate a road identification request. For example, as shown in FIG. 6b, when the vehicle is about to enter the overpass, the first device can automatically generate a road identification request; for instance, the navigation application on the first device can be preset to generate a road identification request when the vehicle is driving toward the overpass and the distance to the overpass is less than a distance threshold.
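As an illustrative sketch of such a preset road condition, the Java fragment below tests whether the vehicle is heading toward the next overpass within a distance threshold. The 200 m threshold and the 30-degree heading tolerance are assumed values for the sketch, not parameters from the disclosure.

    final class RoadConditionMonitor {
        private static final double DISTANCE_THRESHOLD_M = 200.0; // assumed value
        private static final double HEADING_TOLERANCE_DEG = 30.0; // assumed value

        // headingDeg: vehicle heading from the navigation fix;
        // bearingToBridgeDeg: bearing from the vehicle to the next overpass;
        // distanceToBridgeM: remaining distance to the overpass.
        static boolean shouldRequestRoadRecognition(double headingDeg,
                                                    double bearingToBridgeDeg,
                                                    double distanceToBridgeM) {
            // Signed angular difference folded into [-180, 180).
            double diff = ((headingDeg - bearingToBridgeDeg) + 540.0) % 360.0 - 180.0;
            boolean headingTowardBridge = Math.abs(diff) < HEADING_TOLERANCE_DEG;
            return headingTowardBridge && distanceToBridgeM < DISTANCE_THRESHOLD_M;
        }
    }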
The trigger method of the foregoing implementation can achieve correct navigation without requiring any operation by the user, eliminating safety hazards.
In a possible implementation, as shown in FIG. 4, the first device can deliver control commands to the multiple virtual camera HALs through the distributed camera framework and control the physical cameras corresponding to the virtual camera HALs by controlling the multiple virtual camera HALs. The control command may cover processes such as opening the camera, shooting, and closing the camera. Upon receiving a control command, a virtual camera HAL of the first device can send sub-control commands, such as an open command and a shoot command, to the physical camera corresponding to the virtual camera HAL according to the control command.
As shown in FIG. 4, the DMSDP service may include a command pipe and a data pipe; the command pipe is used to transmit control commands, and the data pipe is used to transmit image data. Therefore, the first device can send the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service.
The physical camera of the second device receives the control command through the DMSDP service and controls the local physical camera according to the control command. FIG. 7 is a flowchart of a method for acquiring an image according to an embodiment of this application. As shown in FIG. 7, the method may include the following steps (a sketch follows the step list):
Step S70: The second device receives, through the DMSDP service, a control command sent by the first device, where the control command is used to control the second device to capture the current picture.
Step S71: The second device opens the camera according to the control command and captures the current picture.
Step S72: The second device sends the current picture to the first device.
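For illustration, the Java sketch below shows one possible shape of the second device's handling of steps S70 to S72. DmsdpCommandPipe, DmsdpDataPipe, and LocalCamera are hypothetical placeholders for the command pipe, the data pipe, and the camera behind the local camera HAL; the disclosure does not define these interfaces.

    // Hypothetical placeholder interfaces for this sketch.
    interface DmsdpCommandPipe { String nextCommand(); }      // blocks until a command arrives
    interface DmsdpDataPipe { void sendFrame(byte[] jpeg); }  // returns data to the first device
    interface LocalCamera {
        void open();
        byte[] captureJpeg();
        void close();
    }

    final class RemoteCaptureHandler {
        static void serve(DmsdpCommandPipe commands, DmsdpDataPipe data, LocalCamera camera) {
            while (true) {
                String cmd = commands.nextCommand();       // S70: receive control command
                if ("CAPTURE".equals(cmd)) {
                    camera.open();                         // S71: open the camera
                    byte[] jpeg = camera.captureJpeg();    //      and shoot the current picture
                    camera.close();
                    data.sendFrame(jpeg);                  // S72: send the picture back
                }
            }
        }
    }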
After receiving the control command through the DMSDP service, the second device can call the camera API (application programming interface) according to the control command, then deliver the control command to the camera HAL through the camera framework, and control operations such as opening, shooting, and closing of the physical camera through the camera HAL.
After capturing the current picture, the second device can send the current picture to the first device through the data pipe of the DMSDP service. Therefore, the first device can receive, through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
Because multiple virtual camera HALs are included, the first device can obtain current pictures from multiple physical cameras. After receiving the multiple current pictures, the first device identifies the road on which the first device is located according to the multiple current pictures and the road information obtained by the local navigation application.
In a possible implementation, after the virtual camera HALs of the first device receive the current pictures returned by the corresponding physical cameras, the current pictures can be sent to the camera HAL of the local camera, and the camera HAL of the local camera stitches the multiple current pictures using camera algorithms to obtain a stitched image. Because a local virtual camera HAL only stores the camera capabilities of the corresponding physical camera and does not include camera algorithms, after receiving the current pictures, the virtual camera HALs of the first device can send them to the camera HAL of the local camera, which stitches the current pictures using camera algorithms.
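As a simplified stand-in for the stitching performed by the camera HAL of the local camera, the sketch below merely composes the per-camera pictures side by side using standard Android Bitmap and Canvas APIs. A real implementation would use the camera algorithms mentioned above rather than plain concatenation; this naive composition is only an assumption for illustration.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import java.util.List;

    final class NaiveStitcher {
        // Concatenates the frames horizontally into one panorama-like image.
        static Bitmap stitch(List<Bitmap> frames) {
            int width = 0, height = 0;
            for (Bitmap f : frames) {
                width += f.getWidth();
                height = Math.max(height, f.getHeight());
            }
            Bitmap out = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(out);
            int x = 0;
            for (Bitmap f : frames) {
                canvas.drawBitmap(f, (float) x, 0f, null); // place each frame after the last
                x += f.getWidth();
            }
            return out;
        }
    }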
After obtaining the stitched image, the camera HAL of the local camera can transmit it to the local navigation application, and the local navigation application of the first device identifies the specific road on which the first device is located by combining the stitched image with the road information obtained by the navigation application. The navigation application can obtain the road information from a cloud server.
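As a hedged sketch of this fusion step, the fragment below scores each candidate road from the navigation application against visual cues from the stitched image. SceneClassifier and RoadCandidate are hypothetical, since the disclosure does not specify the recognition algorithm; an elevated/not-elevated cue plus a lane-marking score stand in for it here.

    import android.graphics.Bitmap;
    import java.util.List;

    // Hypothetical placeholder interfaces for this sketch.
    interface SceneClassifier {
        boolean looksElevated(Bitmap stitched);                   // e.g. on-bridge visual cue
        double laneMatchScore(Bitmap stitched, RoadCandidate c);  // e.g. lane-marking match
    }
    interface RoadCandidate {
        boolean isElevated();  // attribute from the navigation app's road info
    }

    final class RoadSelector {
        // Picks the candidate road whose attributes best match the observed scene.
        static RoadCandidate identify(SceneClassifier classifier, Bitmap stitched,
                                      List<RoadCandidate> candidates) {
            boolean onBridge = classifier.looksElevated(stitched);
            RoadCandidate best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (RoadCandidate c : candidates) {
                double score = (c.isElevated() == onBridge ? 1.0 : 0.0)
                             + classifier.laneMatchScore(stitched, c);
                if (score > bestScore) { bestScore = score; best = c; }
            }
            return best;
        }
    }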
The road identification method in the embodiments of this application can implement communication between a mobile phone and multiple vehicle-mounted cameras: the mobile phone controls the vehicle-mounted cameras to capture current pictures of the vehicle's surroundings, the current pictures are returned to the mobile phone, and the mobile phone combines the current pictures with the road information obtained by the navigation application to determine accurate road information. Compared with in-vehicle navigation in the conventional technology, the road identification method in the implementations of this application can locate the exact road by combining real-time pictures of the vehicle's surroundings and can still navigate correctly under complex road conditions. Compared with a navigation application on a terminal, it can likewise locate the exact road by combining real-time pictures of the vehicle's surroundings, requires no manual selection of specific road information by the user, and can achieve correct navigation while eliminating safety hazards.
FIG. 8 is a flowchart of a road identification method according to an embodiment of this application. As shown in FIG. 8, the method further includes:
Step S80: The first device obtains the hardware parameters of the physical camera through the DMSDP service.
Step S81: The first device locally configures a corresponding virtual camera according to the hardware parameters of the physical camera.
Step S82: When detecting a road identification request, the first device sends a control command to the physical camera corresponding to a virtual camera HAL through the DMSDP service, where the control command is used to control the physical camera to capture the current picture.
Step S83: The first device receives, through the DMSDP service, the current picture returned by the physical camera in response to the control command.
Step S84: The first device identifies the road on which the first device is located according to the current picture and road information obtained by the local navigation application.
The hardware parameters include camera capability parameters of the physical camera, and step S81 may include: creating, by the first device according to the camera capability parameters, a virtual camera hardware abstraction layer (HAL) and a camera framework corresponding to the physical camera, and adding the camera framework to the distributed camera framework. For the specific process, refer to the detailed description of the configuration above; details are not repeated here.
For steps S82 to S84, refer to the description of steps S50 to S52; details are not repeated here.
Application Example
FIG. 9 is a schematic diagram of an application scenario according to an embodiment of this application. FIG. 10 is an interaction diagram between a first device and a second device according to an embodiment of this application. The road identification method of this application is further described with reference to FIG. 9 and FIG. 10.
As shown in FIG. 9, the first device can communicate with multiple second devices, and cameras are installed on the second devices. In a possible application scenario, the first device may be a mobile phone and the second devices may be vehicle-mounted cameras; the user drives the car, navigates through the first device, and the road is accurately identified by combining the current pictures captured by the vehicle-mounted cameras.
Before road identification is enabled, the camera programming interfaces of the first device and the second device can be unified as the CameraKit interface, so that the camera interfaces provided by the DMSDP services of the first device and the second device are connected, thereby establishing communication between the first device and the second device. After that, as shown in FIG. 10, the first device sends a configuration request to the second device (S100). After receiving the configuration request, the second device sends the hardware parameters of the physical camera to the first device in the form of a standard interface (S200). After receiving the hardware parameters, the first device can configure the virtual camera corresponding to the physical camera according to the hardware parameters; specifically, the first device creates the virtual camera HAL and camera framework corresponding to the physical camera according to the camera capability parameters (S101).
After the virtual camera corresponding to the physical camera is created, the first device can control the corresponding physical camera by controlling the virtual camera. During navigation, it can control the physical cameras to capture current pictures around the vehicle body, and the navigation application of the first device can identify the road on which the vehicle is located by combining the current pictures with the road information obtained by the navigation application. As shown in FIG. 10, the following steps may specifically be included:
When detecting a road identification request, the first device sends a control command to the physical camera corresponding to a virtual camera HAL through the DMSDP service (S102). The second device opens the camera according to the control command and captures the current picture (S201). The second device sends the current picture to the first device (S202).
After receiving the current pictures, the first device identifies the road on which the first device is located according to the current pictures and the road information obtained by the local navigation application (S103). Specifically, as shown in FIG. 9, the vehicle-mounted cameras located in multiple directions of the vehicle (front, rear, left, right, and so on) open their cameras according to the control command, capture current pictures, and send the current pictures to the first device. Having received the current pictures from multiple directions, the first device can stitch them into a stitched image, which may be a panorama. The navigation application of the first device can identify the exact road by combining the stitched image with the road information obtained by the navigation application, for example, whether the vehicle is on the road on an overpass or the road under it, in which lane of a multi-lane road it is, whether it is on the main road or the side road, and so on.
FIG. 11 is a schematic diagram of a road identification result according to an embodiment of this application. As shown in FIG. 11, by combining the panorama with the road information obtained by navigation, it can be determined that the vehicle is located in the left lane.
The road identification method of this application is based on distributed camera device virtualization technology. It combines the camera processing capabilities of different devices to form a camera data center across devices such as the vehicle and the mobile phone, and analyzes the camera data streams obtained by these devices according to the road information obtained by the navigation software to determine the road position of the current vehicle, thereby improving the user experience of the vehicle owner.
This application further provides a road identification apparatus. FIG. 12 is a block diagram of a road identification apparatus according to an embodiment of this application. The road identification apparatus of an implementation of this application can be applied to a first device, where the first device includes two or more virtual camera HALs. As shown in FIG. 12, the road identification apparatus of an implementation of this application may include:
a control module 91, configured to send, when a road identification request is detected, a control command to the physical camera corresponding to a virtual camera HAL through the DMSDP service, where the control command is used to control the physical camera to capture the current picture;
a first receiving module 92, configured to receive, through the DMSDP service, the current picture returned by the physical camera in response to the control command;
an identification module 93, configured to identify the road on which the first device is located according to the current picture and road information obtained by the local navigation application.
In a possible implementation, the apparatus further includes:
an obtaining module, configured to obtain the hardware parameters of the physical camera through the DMSDP service;
a configuration module, configured to locally configure a corresponding virtual camera according to the hardware parameters of the physical camera.
In a possible implementation, the hardware parameters include camera capability parameters of the physical camera, and the configuration module includes: a configuration unit, configured to create, according to the camera capability parameters, a virtual camera hardware abstraction layer (HAL) and a camera framework corresponding to the physical camera, and add the camera framework to the distributed camera framework.
In a possible implementation, the DMSDP service includes a command pipe and a data pipe,
the control module 91 includes: a control unit, configured to send the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service;
and the first receiving module 92 includes: a receiving unit, configured to receive, through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
In a possible implementation, the identification module 93 includes:
a stitching unit, configured to stitch multiple current pictures through the camera HAL of the local camera to obtain a stitched image;
an identification unit, configured to identify, through the navigation application, the road on which the first device is located according to the stitched image and the road information obtained by the navigation application.
In a possible implementation, the physical camera is a vehicle-mounted camera.
This application further provides an apparatus for acquiring an image, which can be applied to a second device. FIG. 13 is a block diagram of an apparatus for acquiring an image according to an embodiment of this application. As shown in FIG. 13, the apparatus includes:
a second receiving module 94, configured to receive, through the DMSDP service, a control command sent by the first device, where the control command is used to control the second device to capture the current picture;
a capture module 95, configured to open the camera according to the control command and capture the current picture;
a first sending module 96, configured to send the current picture to the first device.
In a possible implementation, the apparatus further includes:
a second sending module, configured to send, when a configuration request sent by the first device is received, the hardware parameters of the physical camera of the second device to the first device through the DMSDP service.
FIG. 14 is a schematic structural diagram of a terminal device according to an embodiment of this application. Taking the terminal device being a mobile phone as an example, FIG. 14 shows a schematic structural diagram of a mobile phone 200.
The mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, a key 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a SIM card interface 295, and the like. The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric pressure sensor, and a bone conduction sensor, which are not shown in the figure).
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the mobile phone 200. In some other embodiments of this application, the mobile phone 200 may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on. Different processing units may be independent devices or may be integrated in one or more processors. The controller may be the nerve center and command center of the mobile phone 200. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of fetching and executing instructions.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache. This memory can store instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, they can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 210, and thereby improves the efficiency of the system.
The processor 210 can run the road identification method provided in the embodiments of this application, so that correct navigation is still possible under relatively complex road conditions without requiring manual selection by the user, achieving correct navigation while eliminating safety hazards. The processor 210 may include different devices; for example, when a CPU and a GPU are integrated, the CPU and the GPU can cooperate to execute the road identification method provided in the embodiments of this application: for instance, some algorithms of the road identification method are executed by the CPU and others by the GPU, to obtain faster processing efficiency.
The display screen 294 is used to display images, videos, and the like. The display screen 294 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 200 may include one or N display screens 294, where N is a positive integer greater than 1. The display screen 294 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces (GUI). For example, the display 294 can display photos, videos, web pages, files, or the like. As another example, the display 294 can display a graphical user interface that includes a status bar, a hideable navigation bar, a time and weather widget, and application icons, such as a browser icon. The status bar includes the carrier name (for example, China Mobile), the mobile network (for example, 4G), the time, and the remaining battery level. The navigation bar includes a back key icon, a home key icon, and a forward key icon. In addition, it can be understood that in some embodiments, the status bar may also include a Bluetooth icon, a Wi-Fi icon, an external device icon, and the like. It can also be understood that in other embodiments, the graphical user interface may also include a Dock bar, and the Dock bar may include commonly used application icons and the like. When the processor 210 detects a touch event of the user's finger (or a stylus or the like) on an application icon, in response to the touch event it opens the user interface of the application corresponding to that application icon and displays the user interface of the application on the display 294.
In this embodiment of this application, the display screen 294 may be one integrated flexible display screen, or may be a spliced display screen composed of two rigid screens and one flexible screen located between the two rigid screens.
After the processor 210 runs the road identification method provided in the embodiments of this application, the terminal device can establish a connection with other terminal devices through the antenna 1, the antenna 2, or the USB interface, transmit data according to the road identification method provided in the embodiments of this application, and control the display screen 294 to display a corresponding graphical user interface.
The camera 293 (a front-facing camera or a rear-facing camera, or a camera that can serve as both) is used to capture still images or video. Generally, the camera 293 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes multiple lenses (convex or concave) for collecting the light signal reflected by the object to be photographed and transmitting the collected light signal to the image sensor. The image sensor generates an original image of the object to be photographed according to the light signal.
The internal memory 221 can be used to store computer-executable program code, and the executable program code includes instructions. The processor 210 executes various functional applications and data processing of the mobile phone 200 by running the instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area. The program storage area may store the operating system, the code of application programs (such as a camera application or a WeChat application), and the like. The data storage area may store data created during the use of the mobile phone 200 (such as images and videos collected by the camera application) and the like.
The internal memory 221 may also store one or more computer programs 1310 corresponding to the road identification method provided in the embodiments of this application. The one or more computer programs 1310 are stored in the memory 221 and configured to be executed by the one or more processors 210. The one or more computer programs 1310 include instructions that can be used to perform the steps in the embodiments corresponding to FIG. 5 and FIG. 8, and the computer program 1310 may include a control module 91, a first receiving module 92, and an identification module 93. The control module 91 is configured to send, when a road identification request is detected, a control command to the physical camera corresponding to a virtual camera HAL through the DMSDP service, where the control command is used to control the physical camera to capture the current picture; the first receiving module 92 is configured to receive, through the DMSDP service, the current picture returned by the physical camera in response to the control command; and the identification module 93 is configured to identify the road on which the first device is located according to the current picture and road information obtained by the local navigation application.
In addition, the internal memory 221 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
Of course, the code of the road identification method provided in the embodiments of this application can also be stored in an external memory. In this case, the processor 210 can run the code of the road identification method stored in the external memory through the external memory interface 220.
The functions of the sensor module 280 are described below.
The gyroscope sensor 280A can be used to determine the motion posture of the mobile phone 200. In some embodiments, the angular velocity of the mobile phone 200 about three axes (that is, the x, y, and z axes) can be determined through the gyroscope sensor 280A; that is, the gyroscope sensor 280A can be used to detect the current motion state of the mobile phone 200, such as shaking or stationary.
When the display screen in this embodiment of this application is a foldable screen, the gyroscope sensor 280A can be used to detect a folding or unfolding operation acting on the display screen 294. The gyroscope sensor 280A can report the detected folding or unfolding operation to the processor 210 as an event to determine the folded or unfolded state of the display screen 294.
The acceleration sensor 280B can detect the magnitude of the acceleration of the mobile phone 200 in various directions (generally along three axes); that is, the acceleration sensor 280B can be used to detect the current motion state of the mobile phone 200, such as shaking or stationary. When the display screen in this embodiment of this application is a foldable screen, the acceleration sensor 280B can be used to detect a folding or unfolding operation acting on the display screen 294. The acceleration sensor 280B can report the detected folding or unfolding operation to the processor 210 as an event to determine the folded or unfolded state of the display screen 294.
The proximity light sensor 280G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The mobile phone emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone; when insufficient reflected light is detected, the mobile phone can determine that there is no object nearby. When the display screen in this embodiment of this application is a foldable screen, the proximity light sensor 280G can be arranged on the first screen of the foldable display screen 294, and the proximity light sensor 280G can detect the folding or unfolding angle between the first screen and the second screen according to the optical path difference of the infrared signal.
The gyroscope sensor 280A (or the acceleration sensor 280B) can send the detected motion state information (such as angular velocity) to the processor 210. The processor 210 determines, based on the motion state information, whether the phone is currently hand-held or on a tripod (for example, when the angular velocity is not 0, it indicates that the mobile phone 200 is hand-held).
The fingerprint sensor 280H is used to collect fingerprints. The mobile phone 200 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The touch sensor 280K is also called a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touchscreen, also called a "touch screen". The touch sensor 280K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 294. In other embodiments, the touch sensor 280K may also be disposed on the surface of the mobile phone 200 at a location different from that of the display screen 294.
For example, the display screen 294 of the mobile phone 200 displays a home screen, and the home screen includes the icons of multiple applications (such as a camera application and a WeChat application). The user taps the icon of the camera application on the home screen through the touch sensor 280K, triggering the processor 210 to start the camera application and open the camera 293. The display screen 294 displays an interface of the camera application, such as a viewfinder interface.
The wireless communication function of the mobile phone 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the mobile phone 200 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In some other embodiments, an antenna can be used in combination with a tuning switch.
The mobile communication module 251 can provide wireless communication solutions including 2G/3G/4G/5G applied on the mobile phone 200. The mobile communication module 251 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 251 can receive electromagnetic waves from the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 251 can also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be provided in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be provided in the same device as at least some of the modules of the processor 210. In this embodiment of this application, the mobile communication module 251 can also be used for information exchange with other terminal devices.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 270A, the receiver 270B, and so on), or displays an image or video through the display screen 294. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 210 and provided in the same device as the mobile communication module 251 or another functional module.
The wireless communication module 252 can provide wireless communication solutions applied on the mobile phone 200, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 252 may be one or more devices integrating at least one communication processing module. The wireless communication module 252 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 210. The wireless communication module 252 can also receive signals to be sent from the processor 210, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through the antenna 2.
In addition, the mobile phone 200 can implement audio functions, such as music playback and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, the application processor, and the like. The mobile phone 200 can receive input from the key 290 and generate key signal input related to user settings and function control of the mobile phone 200. The mobile phone 200 can use the motor 291 to generate vibration alerts (such as an incoming call vibration alert). The indicator 292 in the mobile phone 200 may be an indicator light, which can be used to indicate the charging state and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like. The SIM card interface 295 in the mobile phone 200 is used to connect a SIM card. The SIM card can be brought into contact with and separated from the mobile phone 200 by being inserted into the SIM card interface 295 or pulled out of the SIM card interface 295.
It should be understood that in practical applications, the mobile phone 200 may include more or fewer components than shown in FIG. 14, which is not limited in the embodiments of this application. The illustrated mobile phone 200 is merely an example, and the mobile phone 200 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The software system of the terminal device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the terminal device.
FIG. 15 is a block diagram of a software structure of a terminal device according to an embodiment of this application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 15, the application packages may include applications such as Phone, Camera, Gallery, Calendar, Call, Map, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 15, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make these data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager is used to provide the communication functions of the terminal device, for example, management of the call status (including connected, hung up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, provide message reminders, and so on. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, an alert sound is played, the terminal device vibrates, or the indicator light blinks.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (such as OpenGL ES), and a 2D graphics engine (such as SGL).
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
An embodiment of this application provides a road identification apparatus, including a processor and a memory for storing instructions executable by the processor, where the processor is configured to, when executing the instructions, implement the road identification method shown in FIG. 5 and FIG. 8 above.
An embodiment of this application provides an apparatus for acquiring an image, including a processor and a memory for storing instructions executable by the processor, where the processor is configured to implement the above method for acquiring an image when executing the instructions.
An embodiment of this application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above road identification method or method for acquiring an image.
An embodiment of this application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, where when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above road identification method or method for acquiring an image.
A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital video disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove on which instructions are stored, and any suitable combination of the foregoing.
The computer-readable program instructions or code described herein can be downloaded to the respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions used to perform the operations of this application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), are personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of this application.
Various aspects of this application are described herein with reference to the flowcharts and/or block diagrams of the methods, apparatuses (systems), and computer program products according to the embodiments of this application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, so that the instructions, when executed by the processor of the computer or the other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, causing a series of operational steps to be performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of the apparatuses, systems, methods, and computer program products according to multiple embodiments of this application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of instructions, and the module, program segment, or part of instructions contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may also occur in an order different from that noted in the accompanying drawings. For example, two consecutive blocks may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by hardware (such as circuits or ASICs (application-specific integrated circuits)) that performs the corresponding functions or actions, or can be implemented by a combination of hardware and software, such as firmware.
Although the present invention is described herein in conjunction with the embodiments, in the course of implementing the claimed invention, a person skilled in the art can, by viewing the accompanying drawings, the disclosure, and the appended claims, understand and implement other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude the plural. A single processor or another unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that these measures cannot be combined to good effect.
The embodiments of this application have been described above. The foregoing descriptions are exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to a person of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein are chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the market, or to enable other persons of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

  1. A road identification method, wherein the method is applied to a first device, the first device comprises two or more virtual camera hardware abstraction layers (HALs), and the method comprises:
    sending, by the first device when a road identification request is detected, a control command to the physical camera corresponding to a virtual camera HAL through a distribute mobile sensing development platform (DMSDP) service, wherein the control command is used to control the physical camera to capture the current picture;
    receiving, by the first device through the DMSDP service, the current picture returned by the physical camera in response to the control command;
    identifying, by the first device, the road on which the first device is located according to the current picture and road information obtained by a local navigation application.
  2. The method according to claim 1, wherein the method further comprises:
    obtaining, by the first device, hardware parameters of the physical camera through the DMSDP service;
    locally configuring, by the first device, a corresponding virtual camera according to the hardware parameters of the physical camera.
  3. The method according to claim 2, wherein the hardware parameters comprise camera capability parameters of the physical camera,
    and the locally configuring, by the first device, a corresponding virtual camera according to the hardware parameters of the physical camera comprises:
    creating, by the first device according to the camera capability parameters, a virtual camera HAL and a camera framework corresponding to the physical camera, and adding the camera framework to a distributed camera framework.
  4. The method according to claim 1, wherein the DMSDP service comprises a command pipe and a data pipe,
    the sending, by the first device, a control command to the physical camera corresponding to the virtual camera HAL through the DMSDP service comprises: sending, by the first device, the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service;
    and the receiving, by the first device through the DMSDP service, the current picture returned by the physical camera in response to the control command comprises: receiving, by the first device through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
  5. The method according to claim 1, wherein the identifying, by the first device, the road on which the first device is located according to the current picture and road information obtained by a local navigation application comprises:
    stitching, by the camera HAL of the local camera of the first device, multiple current pictures to obtain a stitched image;
    identifying, by the navigation application of the first device, the road on which the first device is located according to the stitched image and the road information obtained by the navigation application.
  6. The method according to any one of claims 1 to 5, wherein the physical camera is a vehicle-mounted camera.
  7. A method for acquiring an image, wherein the method comprises:
    receiving, by a second device through a distribute mobile sensing development platform (DMSDP) service, a control command sent by a first device, wherein the control command is used to control the second device to capture the current picture;
    opening, by the second device, a camera according to the control command and capturing the current picture;
    sending, by the second device, the current picture to the first device.
  8. The method according to claim 7, wherein the method further comprises:
    sending, by the second device when a configuration request sent by the first device is received, hardware parameters of the physical camera of the second device to the first device through the DMSDP service.
  9. A road identification apparatus, wherein the apparatus is applied to a first device, the first device comprises two or more virtual camera hardware abstraction layers (HALs), and the apparatus comprises:
    a control module, configured to send, when a road identification request is detected, a control command to the physical camera corresponding to a virtual camera HAL through a distribute mobile sensing development platform (DMSDP) service, wherein the control command is used to control the physical camera to capture the current picture;
    a first receiving module, configured to receive, through the DMSDP service, the current picture returned by the physical camera in response to the control command;
    an identification module, configured to identify the road on which the first device is located according to the current picture and road information obtained by a local navigation application.
  10. The apparatus according to claim 9, wherein the apparatus further comprises:
    an obtaining module, configured to obtain hardware parameters of the physical camera through the DMSDP service;
    a configuration module, configured to locally configure a corresponding virtual camera according to the hardware parameters of the physical camera.
  11. The apparatus according to claim 10, wherein the hardware parameters comprise camera capability parameters of the physical camera,
    and the configuration module comprises:
    a configuration unit, configured to create, according to the camera capability parameters, a virtual camera HAL and a camera framework corresponding to the physical camera, and add the camera framework to a distributed camera framework.
  12. The apparatus according to claim 9, wherein the DMSDP service comprises a command pipe and a data pipe,
    the control module comprises: a control unit, configured to send the control command to the physical camera corresponding to the virtual camera through the command pipe of the DMSDP service;
    and the first receiving module comprises: a receiving unit, configured to receive, through the data pipe of the DMSDP service, the current picture returned by the physical camera in response to the control command.
  13. The apparatus according to claim 9, wherein the identification module comprises:
    a stitching unit, configured to stitch multiple current pictures through the camera HAL of the local camera to obtain a stitched image;
    an identification unit, configured to identify, through the navigation application, the road on which the first device is located according to the stitched image and the road information obtained by the navigation application.
  14. The apparatus according to any one of claims 9 to 13, wherein the physical camera is a vehicle-mounted camera.
  15. An apparatus for acquiring an image, wherein the apparatus comprises:
    a second receiving module, configured to receive, through a distribute mobile sensing development platform (DMSDP) service, a control command sent by a first device, wherein the control command is used to control the second device to capture the current picture;
    a capture module, configured to open a camera according to the control command and capture the current picture;
    a first sending module, configured to send the current picture to the first device.
  16. The apparatus according to claim 15, wherein the apparatus further comprises:
    a second sending module, configured to send, when a configuration request sent by the first device is received, hardware parameters of the physical camera of the second device to the first device through the DMSDP service.
  17. A data transmission apparatus, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to, when executing the instructions, implement the method according to any one of claims 1 to 6, or implement the method according to any one of claims 7 to 8.
  18. A non-volatile computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 6, or implement the method according to any one of claims 7 to 8.
PCT/CN2021/130988 2020-11-20 2021-11-16 Road identification method and apparatus WO2022105758A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/253,700 US20240125603A1 (en) 2020-11-20 2021-11-16 Road Recognition Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011309744.3A CN114519935B (zh) Road identification method and apparatus
CN202011309744.3 2020-11-20

Publications (1)

Publication Number Publication Date
WO2022105758A1 true WO2022105758A1 (zh) 2022-05-27

Family

ID=81594787

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130988 WO2022105758A1 (zh) Road identification method and apparatus

Country Status (3)

Country Link
US (1) US20240125603A1 (zh)
CN (1) CN114519935B (zh)
WO (1) WO2022105758A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135448A (zh) * 2023-02-24 2023-11-28 荣耀终端有限公司 拍摄的方法和电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954292A (zh) * 2012-05-30 2014-07-30 常州市新科汽车电子有限公司 导航仪根据道路车线对道路主辅路的匹配方法
CN105338249A (zh) * 2015-11-24 2016-02-17 努比亚技术有限公司 基于独立相机系统进行拍摄方法及移动终端
CN107659768A (zh) * 2017-08-08 2018-02-02 珠海全志科技股份有限公司 一种基于Android多应用共享摄像头的系统及方法
JP2019057767A (ja) * 2017-09-20 2019-04-11 株式会社デンソー 携帯端末、遠隔操作方法
CN110617826A (zh) * 2019-09-29 2019-12-27 百度在线网络技术(北京)有限公司 车辆导航中高架桥区识别方法、装置、设备和存储介质
CN111193870A (zh) * 2020-01-09 2020-05-22 华为终端有限公司 通过移动设备控制车载摄像头的方法、设备和系统
CN111405178A (zh) * 2020-03-09 2020-07-10 Oppo广东移动通信有限公司 基于Camera2的拍照方法、装置、存储介质及移动设备

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPS230802A0 (en) * 2002-05-14 2002-06-13 Roberts, David Grant Aircraft based visual aid
US20080141180A1 (en) * 2005-04-07 2008-06-12 Iofy Corporation Apparatus and Method for Utilizing an Information Unit to Provide Navigation Features on a Device
JP4787196B2 (ja) * 2007-03-26 2011-10-05 アルパイン株式会社 車載用ナビゲーション装置
US9200919B2 (en) * 2012-06-05 2015-12-01 Apple Inc. Method, system and apparatus for selectively obtaining map image data according to virtual camera velocity
US8831780B2 (en) * 2012-07-05 2014-09-09 Stanislav Zelivinski System and method for creating virtual presence
US9798322B2 (en) * 2014-06-19 2017-10-24 Skydio, Inc. Virtual camera interface and other user interaction paradigms for a flying digital assistant
CN104990555B (zh) * 2015-02-17 2018-07-03 上海安吉四维信息技术有限公司 实景导航系统的工作方法
US10739157B2 (en) * 2016-06-12 2020-08-11 Apple Inc. Grouping maneuvers for display in a navigation presentation
KR20180072139A (ko) * 2016-12-21 2018-06-29 현대자동차주식회사 차량 및 그 제어 방법
CN107167147A (zh) * 2017-05-02 2017-09-15 深圳市元征科技股份有限公司 基于窄带物联网的导航方法、眼镜及可读存储介质
ES2704373B2 (es) * 2017-09-15 2020-05-29 Seat Sa Método y sistema para mostrar información de realidad virtual en un vehículo
CN109040968B (zh) * 2018-06-26 2020-06-16 努比亚技术有限公司 路况提醒方法、移动终端及计算机可读存储介质
CN109141464B (zh) * 2018-09-30 2020-12-29 百度在线网络技术(北京)有限公司 导航变道提示方法和装置
CN110164164B (zh) * 2019-04-03 2022-11-25 浙江工业大学之江学院 利用摄像头拍摄功能增强手机导航软件识别复杂道路精准度的方法
CN110882543B (zh) * 2019-11-26 2022-05-17 腾讯科技(深圳)有限公司 控制虚拟环境中虚拟对象下落的方法、装置及终端
CN116320782B (zh) * 2019-12-18 2024-03-26 荣耀终端有限公司 一种控制方法、电子设备、计算机可读存储介质、芯片
CN111179436A (zh) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 一种基于高精度定位技术的混合现实交互系统
CN111953848B (zh) * 2020-08-19 2022-03-11 Oppo广东移动通信有限公司 通过情景感知实现应用功能的系统、方法、相关装置及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954292A (zh) * 2012-05-30 2014-07-30 常州市新科汽车电子有限公司 导航仪根据道路车线对道路主辅路的匹配方法
CN105338249A (zh) * 2015-11-24 2016-02-17 努比亚技术有限公司 基于独立相机系统进行拍摄方法及移动终端
CN107659768A (zh) * 2017-08-08 2018-02-02 珠海全志科技股份有限公司 一种基于Android多应用共享摄像头的系统及方法
JP2019057767A (ja) * 2017-09-20 2019-04-11 株式会社デンソー 携帯端末、遠隔操作方法
CN110617826A (zh) * 2019-09-29 2019-12-27 百度在线网络技术(北京)有限公司 车辆导航中高架桥区识别方法、装置、设备和存储介质
CN111193870A (zh) * 2020-01-09 2020-05-22 华为终端有限公司 通过移动设备控制车载摄像头的方法、设备和系统
CN111405178A (zh) * 2020-03-09 2020-07-10 Oppo广东移动通信有限公司 基于Camera2的拍照方法、装置、存储介质及移动设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135448A (zh) * 2023-02-24 2023-11-28 荣耀终端有限公司 拍摄的方法和电子设备

Also Published As

Publication number Publication date
CN114519935B (zh) 2023-06-06
CN114519935A (zh) 2022-05-20
US20240125603A1 (en) 2024-04-18

Similar Documents

Publication Publication Date Title
WO2022105759A1 (zh) 视频处理方法、装置及存储介质
JP2023514631A (ja) インタフェースレイアウト方法、装置、及び、システム
CN111666055B (zh) 数据的传输方法及装置
CN111125442B (zh) 数据标注方法及装置
WO2017219884A1 (zh) 服务图层生成方法、装置、终端设备和用户界面系统
WO2022078095A1 (zh) 一种故障检测方法及电子终端
WO2022156443A1 (zh) 车机连接方法及装置
WO2022105758A1 (zh) 道路识别方法以及装置
WO2022134691A1 (zh) 一种终端设备中啸叫处理方法及装置、终端
CN114241415A (zh) 车辆的位置监控方法、边缘计算设备、监控设备及系统
WO2022105716A1 (zh) 基于分布式控制的相机控制方法及终端设备
WO2023202407A1 (zh) 应用的显示方法、装置及存储介质
WO2022194005A1 (zh) 一种跨设备同步显示的控制方法及系统
WO2022105793A1 (zh) 图像处理方法及其设备
CN115114607A (zh) 分享授权方法、装置及存储介质
WO2022121751A1 (zh) 相机控制方法、装置和存储介质
WO2022166614A1 (zh) 针对控件操作的执行方法、装置、存储介质和控件
CN112699906A (zh) 获取训练数据的方法、装置及存储介质
WO2022179471A1 (zh) 卡证文本识别方法、装置和存储介质
CN116108118A (zh) 一种生成热力地图的方法及终端设备
CN117135448A (zh) 拍摄的方法和电子设备
CN116561459A (zh) 一种内容管理方法、电子设备及系统
CN117850718A (zh) 一种显示屏选择方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893905

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18253700

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21893905

Country of ref document: EP

Kind code of ref document: A1