CN113099170A - Method, apparatus, and computer storage medium for information processing - Google Patents

Method, apparatus, and computer storage medium for information processing Download PDF

Info

Publication number
CN113099170A
Authority
CN
China
Prior art keywords
electronic device
image
display
images
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010022097.1A
Other languages
Chinese (zh)
Other versions
CN113099170B (en)
Inventor
应臻恺
孙中全
田发景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect and Technology Shanghai Corp
Original Assignee
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Electronic Equipment Manufacturing Co Ltd filed Critical Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority to CN202010022097.1A
Publication of CN113099170A
Application granted
Publication of CN113099170B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to example embodiments of the present disclosure, a method, apparatus, and computer-readable storage medium for information processing are provided. The method includes, at a server: if an instruction for displaying an image captured by a camera of a vehicle is received from an electronic device, sending the instruction to an associated in-vehicle electronic device; acquiring a plurality of images of a plurality of physical areas in which a plurality of display devices are located; determining, based on the plurality of images, the physical area in which a user associated with the electronic device is located; and, if the captured image is received from the in-vehicle electronic device, forwarding the image or a link to the image to a first display device in the physical area where the user is located, so that the image is displayed on the first display device. In this way, the physical area where the user is located can be determined from the images of the physical areas, and the image captured by the vehicle-mounted camera can be forwarded to the display device in that area, so that the display of the image intelligently follows the user's location and the user experience is improved.

Description

Method, apparatus, and computer storage medium for information processing
Technical Field
Embodiments of the present disclosure relate generally to the field of information processing, and more particularly, to a method, apparatus, and computer storage medium for information processing.
Background
With the development of the Internet of Vehicles, more and more vehicles have networking capability. To improve vehicle safety, vehicles are also increasingly equipped with on-board cameras to capture images of the vehicle's surroundings. Through an Internet of Vehicles platform, a smartphone can also receive vehicle status information and execute remote vehicle-control functions such as remote start, remote air-conditioning adjustment, and remote vehicle monitoring.
Disclosure of Invention
Embodiments of the present disclosure provide a method, device, and computer storage medium for information processing that can determine the physical area in which a user is located based on a plurality of images of a plurality of physical areas, and forward an image captured by a vehicle-mounted camera, or a link to the image, to a display device in that physical area, so that the display of the captured image intelligently follows the user's location, improving the user experience.
In a first aspect of the disclosure, a method for information processing is provided. The method comprises: at a server, if an instruction for displaying an image captured by a camera of a vehicle is received from an electronic device, sending the instruction to an in-vehicle electronic device, the in-vehicle electronic device being associated with the electronic device; acquiring a plurality of images of a plurality of physical areas in which a plurality of display devices are located; determining, based on the plurality of images, the physical area in which a user associated with the electronic device is located; and, if the captured image is received from the in-vehicle electronic device, forwarding the image or a link to the image to a first display device in the physical area where the user is located, so as to display the image on the first display device.
In a second aspect of the disclosure, an electronic device is provided. The electronic device comprises at least one processing unit and at least one memory. The at least one memory is coupled to the at least one processing unit and stores instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform acts comprising: at a server, if an instruction for displaying an image captured by a camera of a vehicle is received from an electronic device, sending the instruction to an in-vehicle electronic device, the in-vehicle electronic device being associated with the electronic device; acquiring a plurality of images of a plurality of physical areas in which a plurality of display devices are located; determining, based on the plurality of images, the physical area in which a user associated with the electronic device is located; and, if the captured image is received from the in-vehicle electronic device, forwarding the image or a link to the image to a first display device in the physical area where the user is located, so as to display the image on the first display device.
In a third aspect of the disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program which, when executed by a machine, causes the machine to carry out any of the steps of the method described according to the first aspect of the disclosure.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 shows a schematic diagram of an example of an information handling environment 100, according to an embodiment of the present disclosure;
FIG. 2 shows a schematic flow diagram of a method 200 for information processing in accordance with an embodiment of the present disclosure;
FIG. 3 shows a schematic flow chart diagram of a method 300 for information processing in accordance with an embodiment of the present disclosure;
FIG. 4 shows a schematic flow chart diagram of a method 400 for information processing in accordance with an embodiment of the present disclosure; and
fig. 5 illustrates a schematic block diagram of an example device 500 that may be used to implement embodiments of the present disclosure.
Like or corresponding reference characters designate like or corresponding parts throughout the several views.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
At present, the experience of viewing a vehicle's surveillance video on a smartphone is poor, mainly because the smartphone's screen size is limited. How to present the vehicle's surveillance video on electronic equipment such as a television, so as to improve the user experience, is therefore a problem to be solved.
To address, at least in part, one or more of the above issues and other potential issues, an example embodiment of the present disclosure proposes a scheme for information processing. In this scheme, at a server, if an instruction for displaying an image captured by a camera of a vehicle is received from an electronic device, the instruction is sent to an in-vehicle electronic device that is associated with the electronic device; a plurality of images of a plurality of physical areas in which a plurality of display devices are located are acquired; the physical area in which a user associated with the electronic device is located is determined based on the plurality of images; and, if the captured image is received from the in-vehicle electronic device, the image or a link to the image is forwarded to a first display device in the physical area where the user is located, so that the image is displayed on the first display device.
In this way, the physical area where the user is located can be determined based on the plurality of images of the plurality of physical areas, and the image captured by the vehicle-mounted camera, or a link to it, can be forwarded to the display device in that area, so that the display of the captured image intelligently follows the user's location, improving the user experience.
Hereinafter, specific examples of the present scheme will be described in more detail with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an example of an information processing environment 100, according to an embodiment of the present disclosure. The information processing environment 100 includes an electronic apparatus 110, a server 120, an in-vehicle electronic apparatus 130, a plurality of image capturing devices 140, a plurality of display apparatuses 150, and a user 160. The in-vehicle electronic device 130, the plurality of image capturing apparatuses 140, and the plurality of display devices 150 may be associated with the electronic device 110, for example, via the same account.
In some embodiments, the electronic device 110 may be an electronic device that is capable of wireless transceiving and may access the internet. The electronic device 110 is, for example, but not limited to, a mobile phone, a smart phone, a laptop computer, a tablet computer, a Personal Digital Assistant (PDA), a wearable device, and the like.
In some embodiments, electronic device 110 may include at least a communication module, a memory, and a processor. The communication module is used to communicate with the server 120 through wireless communication technology such as wifi, cellular, etc. The memory is used to store one or more computer programs. The processor is coupled to the memory and executes the one or more programs to enable the electronic device 110 to perform one or more functions.
The in-vehicle electronic device 130 is, for example, but not limited to, an on-board computer, an on-board controller, or the like. The in-vehicle electronic device 130 includes at least a processor and a memory. The memory is used to store one or more computer programs. The processor is coupled to the memory and executes the one or more programs to enable the in-vehicle electronic device to perform one or more functions. The in-vehicle electronic device 130 may be coupled to an in-vehicle camera to capture images or video of the interior or exterior environment of the vehicle. It may be coupled to an in-vehicle audio input device, such as a microphone, to capture audio input. It may also be coupled to a communication module, such as a T-BOX, which may be used to communicate with the electronic device 110, the server 120, and the display devices 150.
The camera 140 may be, for example, a camera having wireless transceiving capability or capable of accessing the internet. The image pickup device 140 may include, for example, an optical image pickup device and an infrared image pickup device.
The display device 150 may be, for example, a display device with wireless transceiving capabilities or capable of accessing the internet, such as, but not limited to, a television, a projector, a personal computer, a display, and the like.
In some embodiments, camera 140 and display device 150 may include at least a communication module, a memory, and a processor. The communication module is used to communicate with the server 120 and the in-vehicle electronic device 130 through a wireless communication technology such as wifi, cellular, and the like, and/or a wired communication technology. The memory is used to store one or more computer programs. The processor is coupled to the memory and executes the one or more programs to enable the camera 140 and the display device 150 to perform one or more functions.
In some embodiments, the camera and its corresponding display device may be the same electronic device, such as an electronic device with a camera and a display, such as a laptop computer, tablet computer, personal computer, etc. with a camera.
The plurality of cameras 140 and the plurality of display devices 150 may be located in a plurality of physical areas, such as a plurality of rooms, the cameras 140 may correspond to the display devices 150, and the cameras 140 and their corresponding display devices 150 may be located in the same physical area, such as the same room. The plurality of cameras 140 and the plurality of display devices 150 may be associated with a predetermined location, such as a home location.
Although 3 cameras 140 and 3 display devices 150 are shown in fig. 1, it should be understood that more or fewer cameras 140 and display devices 150 may be included.
The server 120 includes, but is not limited to, personal computers, server computers, multiprocessor systems, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. In some embodiments, the server 120 may have one or more processing units, including special purpose processing units such as GPUs, FPGAs, ASICs, and the like, as well as general purpose processing units such as CPUs. In addition, one or more virtual machines may also be running on the server 120.
In some embodiments, an Internet of Vehicles platform may be implemented in the server 120, or the server 120 may be part of such a platform. Alternatively or additionally, in some embodiments, the server 120 may implement, or be part of, a platform for device management. For example, the electronic device 110, the in-vehicle electronic device 130, the plurality of cameras 140, and the plurality of display devices 150 may register their information, such as identification information, location information, and the physical area in which they are located, with the server 120 at startup; alternatively, the user may register this information for the plurality of cameras 140 and the plurality of display devices 150 with the server 120 through the electronic device 110. The identification information may include, for example, a name, a UUID, a device serial number, a network address, and the like.
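The device registration described above can be sketched as a minimal server-side registry. This is an illustrative assumption for exposition only: the class names, fields, and example values are hypothetical and not the patent's actual schema.

```python
# Hypothetical sketch of the server-side device registry; all names and
# fields are illustrative assumptions, not the patent's actual schema.
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    device_id: str       # e.g. a UUID or device serial number
    kind: str            # "camera", "display", "in_vehicle", ...
    network_address: str
    physical_area: str   # e.g. "living_room"
    location: str        # predetermined location, e.g. "home"

class DeviceRegistry:
    """Registry populated when devices register at startup."""
    def __init__(self):
        self._records = {}

    def register(self, record: DeviceRecord) -> None:
        self._records[record.device_id] = record

    def lookup(self, device_id: str) -> DeviceRecord:
        return self._records[device_id]

registry = DeviceRegistry()
registry.register(
    DeviceRecord("cam-1", "camera", "192.168.1.10", "living_room", "home")
)
```

A real implementation would persist these records and authenticate the registering devices; the sketch only shows the shape of the data the server keeps.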
The actions performed by the server 120 are described in detail below in conjunction with fig. 2.
Fig. 2 shows a flow diagram of a method 200 for information processing according to an embodiment of the present disclosure. For example, the method 200 may be performed by the server 120 as shown in FIG. 1. It should be understood that method 200 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the present disclosure is not limited in this respect.
At block 202, at the server 120, instructions from the electronic device 110 to display an image captured by a camera of the vehicle are received. In some embodiments, the instruction for displaying the image captured by the camera of the vehicle may be triggered by or included in an automatic driving instruction or an automatic parking instruction from the electronic device 110, for example. In some embodiments, the image may be a surveillance video stream relating to the interior environment or the exterior environment of the vehicle.
If an instruction from the electronic device 110 to display an image captured by a camera of the vehicle is received at block 202, the instruction is sent to the in-vehicle electronic device 130 at block 204, and the in-vehicle electronic device 130 is associated with the electronic device 110. After transmitting an instruction to display an image captured by the camera of the vehicle to the in-vehicle electronic apparatus 130, the in-vehicle electronic apparatus 130 may call the camera of the vehicle to capture the image, and then may transmit the captured image or a link regarding the image to the server 120.
At block 206, a plurality of images are acquired for a plurality of physical areas in which a plurality of display devices are located.
The plurality of images may be captured by a plurality of cameras, for example, and the plurality of cameras and the plurality of display devices may be associated with a predetermined location, for example. The predetermined location may comprise, for example, a home location. For example, information indicating correspondence relationships among a plurality of image pickup devices, a plurality of display devices, physical areas, and predetermined positions, such as a table, may be stored at the server 120. For example, the table may indicate that a home location is associated with the camera 1-3 and the display 1-3, the camera 1 and the display 1 being in the physical area 1, the camera 2 and the display 2 being in the physical area 2, and the camera 3 and the display 3 being in the physical area 3.
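The correspondence table just described can be sketched as a simple mapping from a predetermined location to (area, camera, display) rows. The structure and names below are assumptions for illustration, not the patent's storage format.

```python
# Illustrative sketch of the correspondence table described above; the
# structure and names are assumptions, not the patent's storage format.
CORRESPONDENCE = {
    "home": [  # predetermined location
        {"area": "area_1", "camera": "camera_1", "display": "display_1"},
        {"area": "area_2", "camera": "camera_2", "display": "display_2"},
        {"area": "area_3", "camera": "camera_3", "display": "display_3"},
    ],
}

def display_for_area(location: str, area: str) -> str:
    """Return the display device registered for a physical area."""
    for row in CORRESPONDENCE[location]:
        if row["area"] == area:
            return row["display"]
    raise KeyError(f"no display registered for {area} at {location}")
```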
The physical area may comprise a room, for example. For example, a user's house may have 3 rooms, a living room, a main bedroom, and a sub bedroom, respectively, with 1 camera and 1 display device installed in each room, and 3 images may be captured via 3 cameras in the 3 rooms.
In some embodiments, acquiring the plurality of images of the plurality of physical areas in which the plurality of display devices are located may include: determining whether the plurality of cameras have all been activated; if it is determined that at least one camera of the plurality of cameras has not been activated, sending an activation instruction to the at least one camera; and, if it is determined that the plurality of cameras have all been activated, acquiring the plurality of images via the plurality of cameras, the plurality of cameras being associated with the plurality of display devices and the predetermined location.
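The activate-then-capture sequence above can be sketched as follows. The `Camera` class and its methods are hypothetical stand-ins for the real camera-device protocol, which the patent does not specify.

```python
# Minimal sketch of the activation check described above. The Camera
# class is a hypothetical stand-in for the real camera-device protocol.
class Camera:
    def __init__(self, name):
        self.name = name
        self.activated = False

    def activate(self):
        self.activated = True   # stands in for the activation instruction

    def capture(self):
        return f"image_from_{self.name}"

def acquire_area_images(cameras):
    # First pass: activate any camera that is not yet activated.
    for cam in cameras:
        if not cam.activated:
            cam.activate()
    # Once all cameras are activated, capture one image per physical area.
    return [cam.capture() for cam in cameras]

images = acquire_area_images([Camera("camera_1"), Camera("camera_2")])
```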
At block 208, based on the plurality of images, a physical area in which a user associated with the electronic device is located is determined. For example, multiple images may be identified based on image recognition techniques, determining a physical area in which a user associated with the electronic device is located. For example, techniques such as face recognition or human shape recognition may be employed to identify multiple images and determine a physical area in which a user associated with the electronic device is located.
At block 210, the captured image is received from the in-vehicle electronic device 130.
If the captured image is received from the in-vehicle electronic device 130 at block 210, the image or a link to the image is forwarded to the first display device 150 of the physical area where the user is located to display the image at the first display device 150 at block 212. The link to the image may include, for example, a link to a storage address of the image, such as a URL. For example, the server 120 may generate a link to a storage address of the image based on the captured image.
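Generating a link to the image's storage address, as described above, might look like the following sketch. The base URL, the content-derived key, and the omission of the actual storage write are all assumptions for illustration.

```python
# Illustrative sketch of generating a link to the stored image; the
# storage path and URL scheme are assumptions, not the patent's design.
import hashlib

def store_and_link(image_bytes: bytes,
                   base_url: str = "https://server.example/media") -> str:
    """Store the captured image and return a URL pointing to it."""
    key = hashlib.sha256(image_bytes).hexdigest()[:16]  # content-derived key
    # (the actual storage write is omitted in this sketch)
    return f"{base_url}/{key}"

link = store_and_link(b"captured_frame")
```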
In this way, the physical area where the user is located can be determined based on the plurality of images of the plurality of physical areas, and the image captured by the vehicle-mounted camera, or a link to it, can be forwarded to the display device in that area, so that the display of the captured image intelligently follows the user's location, improving the user experience.
Alternatively, in some embodiments, rather than receiving the captured image from the in-vehicle electronic device 130, the network address of the first display device is sent to the in-vehicle electronic device 130 at block 208 so that the in-vehicle electronic device 130 establishes a connection with the first display device 150 for transmission of the image. The network address may include, for example, an IP address or the like.
Therefore, the captured image can be directly transmitted between the vehicle-mounted electronic device 130 and the first display device 150 without being forwarded by the server 120, so that the delay of image transmission, particularly video transmission, is reduced, and the user experience is improved.
In some embodiments, determining the physical area in which the user associated with the electronic device is located may include: identifying faces in the plurality of images; if a face is identified in any of the plurality of images, determining whether the identified face matches the user associated with the electronic device; and, if the identified face matches the user, determining that the display device included in the physical area corresponding to that image is the first display device.
In this way, the physical area in which the user associated with the electronic device is located, and the corresponding display device, can be determined based on face recognition, so that the physical area is determined accurately and the captured image is presented on the display device included in that area. The image captured by the vehicle-mounted camera can thus intelligently follow the user's location, improving the user experience.
Alternatively or additionally, in some embodiments, the method 200 may further include forwarding the image from the in-vehicle electronic device, or a link to the image, to the electronic device 110 for display there if it is determined that the identified face does not match the user 160 associated with the electronic device 110. For example, a facial image of the user associated with the electronic device 110 may be stored at the server 120. Whether the identified face matches the user may be determined using any suitable method, such as a deep neural network model.
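The routing decision described in the last few paragraphs, including the fall-back to the user's own electronic device, can be sketched as below. Face detection and matching are mocked with plain string comparison; a real system would use a face-recognition model here, as the description notes.

```python
# Sketch of the routing decision: forward to the display in the area
# where the user's face is recognized, otherwise fall back to the phone.
# Face matching is mocked with string equality for illustration.
def route_target(area_faces, user_face, displays_by_area, phone):
    """area_faces maps each physical area to the face detected there
    (or None). Returns the device that should receive the image/link."""
    for area, face in area_faces.items():
        if face is not None and face == user_face:   # face matches the user
            return displays_by_area[area]            # forward to that display
    # No face anywhere, or only faces that are not the user's:
    # forward to the user's own electronic device instead.
    return phone

target = route_target(
    {"living_room": "alice", "bedroom": None},
    user_face="alice",
    displays_by_area={"living_room": "display_1", "bedroom": "display_2"},
    phone="electronic_device_110",
)
```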
Therefore, when it is determined that a person is present in a certain physical area but that person is not the user associated with the electronic device, the image from the in-vehicle electronic device can instead be displayed on the user's electronic device, preventing the image from being shown to other people and improving the user experience.
Alternatively or additionally, in some embodiments, method 200 may further include forwarding the image from the in-vehicle electronic device or a link to the image to electronic device 110 for display of the image at electronic device 110 if it is determined that no human face or human figure is recognized in any of the plurality of images.
Therefore, when it is determined that no one is present in any of the physical areas, the image from the in-vehicle electronic device can be displayed on the user's electronic device, preventing the image from being shown to others and improving the user experience.
Alternatively or additionally, in some embodiments, acquiring the plurality of images with respect to the plurality of physical areas in which the plurality of display devices are located may include determining a distance between the location of the electronic device 110 and a predetermined location, and capturing the plurality of images via the plurality of cameras 140 located in the plurality of physical areas if the distance is determined to be less than a threshold distance.
The location of the electronic device 110 may, for example, be previously received from the electronic device 110. The threshold distance may include, for example, 5 meters, 3 meters, 1 meter, and so on. For example, if it is determined that the location of the electronic device 110 is less than a threshold distance, e.g., 3 meters, from the home location, a plurality of images may be captured via a plurality of cameras located in a plurality of rooms in the home to determine the room in which the user is located.
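The distance check described above can be sketched as follows. Planar coordinates are used for simplicity; a real deployment would likely use geographic coordinates and a great-circle (haversine) distance, which the patent does not prescribe.

```python
# Minimal sketch of the distance check described above, using planar
# coordinates for simplicity; real geolocation would use geographic
# coordinates and haversine distance.
import math

def should_use_home_cameras(phone_pos, home_pos, threshold_m=3.0):
    """Capture room images only when the phone is within the threshold
    distance of the predetermined (home) location."""
    dx = phone_pos[0] - home_pos[0]
    dy = phone_pos[1] - home_pos[1]
    return math.hypot(dx, dy) < threshold_m
```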
Therefore, the plurality of cameras at the predetermined location are invoked to determine the physical area in which the user is located only when the user approaches or is at the predetermined location, making the timing of the invocation more precise and improving the user experience.
Alternatively or additionally, in some embodiments, forwarding the image or the link to the image to the first display device in the physical area where the user is located includes: determining whether the first display device is activated; if it is determined that the first display device is not activated, sending an activation instruction to the first display device, or sending an instruction to activate the first display device to the electronic device; and, if it is determined that the first display device is activated, forwarding the image or the link to the image to the first display device.
In some embodiments, the server 120 may send an inquiry message to the first display device asking whether it is activated, determine that the first display device is not activated if no feedback is received within a predetermined time interval, and determine that it is activated otherwise. In other embodiments, the first display device may report its status to the server 120; the server 120 may store this status and determine whether the first display device is activated based on the stored status.
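The inquiry-with-timeout check in the first of these embodiments can be sketched as follows. The transport is mocked as a callable that returns the display's reply or `None`; the reply values are illustrative assumptions.

```python
# Sketch of the activation inquiry with a timeout described above. The
# transport is mocked as a callable returning the reply or None; the
# reply values ("on"/"off") are illustrative assumptions.
import time

def is_activated(query_fn, timeout_s=1.0, poll_s=0.05):
    """Treat the display as not activated if no feedback arrives
    within the predetermined time interval."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        reply = query_fn()
        if reply is not None:
            return reply == "on"
        time.sleep(poll_s)
    return False

# A display that never answers is considered not activated.
silent_display = lambda: None
```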
In some embodiments, the server 120 may send a launch instruction to the first display device to launch the first display device. For example, the first display device may have a remote start function, and the communication module may listen to a remote start instruction, and if the remote start instruction is received, may start to receive and display the related content.
Alternatively or additionally, in some embodiments, the server 120 may send an instruction to the electronic device 110 to activate the first display device, so that the electronic device 110 sends an activation instruction to the first display device. For example, the electronic device 110 may have a function for activating the first display device. In some embodiments, the electronic device 110 may be equipped with an infrared device that sends an infrared activation instruction to a corresponding infrared device of the first display device. In other embodiments, the electronic device 110 may activate the first display device through other wireless technologies, such as Wi-Fi: for example, when the electronic device 110 and the first display device are on the same local area network, the electronic device 110 may send an activation instruction over that network, and the first display device may listen on the network and activate upon receiving the instruction.
Therefore, when the first display device is not activated, it is first activated and the image from the in-vehicle electronic device is then forwarded; when the first display device is already activated, the image can be forwarded to it directly. This solves the problem of how to display the image when the display device has not been started, improving the user experience.
Alternatively or additionally, in some embodiments, the method 200 may further include, if it is determined that the distance between the location of the electronic device 110 and the predetermined location is greater than or equal to the threshold distance, forwarding an image from the in-vehicle electronic device, or a link to the image, to the electronic device 110 for display of the image there. For example, if the user leaves home with the electronic device 110, the image from the in-vehicle electronic device is forwarded to the electronic device 110.
In this way, when the user is far from the predetermined location, the image from the in-vehicle electronic device is displayed on the electronic device, improving the user experience.
Alternatively or additionally, in some embodiments, the method 200 may further include sending a turn-off instruction or a standby instruction to the first display device if it is determined that the distance between the location of the electronic device 110 and the predetermined location is greater than or equal to the threshold distance. For example, if the user leaves home with the electronic device 110, the first display device currently displaying images may be turned off or placed in standby, so that the display of images follows the user's location.
In this way, when the user is far from the predetermined location, the first display device that was started and displaying the image from the in-vehicle electronic device is turned off or placed in standby. This avoids leaving the display device on when no one is present, saves energy, and improves the user experience.
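Putting the distance checks above together, the server-side routing decision can be sketched as follows. The function and its return values are purely illustrative names, not part of this disclosure.

```python
def route_image(distance_m, threshold_m, display_on):
    """Decide where to send the vehicle image and what to do with the display.

    distance_m:  distance between the electronic device and the predetermined location.
    threshold_m: the threshold distance.
    display_on:  whether the first display device is currently started.

    Returns (target, display_action), where target is "display" or "phone".
    """
    if distance_m < threshold_m:
        # User is near the predetermined location: show on the display device,
        # starting it first if necessary.
        return ("display", "activate" if not display_on else "keep_on")
    # User is far away: route to the phone and power down any idle display.
    return ("phone", "turn_off_or_standby" if display_on else "keep_off")
```

A call such as `route_image(120.0, 50.0, True)` would direct the image to the phone and put the display device into standby, matching the behavior described above.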
Fig. 3 shows a flow diagram of a method 300 for information processing according to an embodiment of the present disclosure. For example, the method 300 may be performed by the server 120 as shown in FIG. 1. It should be understood that method 300 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 302, at the server 120, instructions from the electronic device to display an image captured by a camera of the vehicle are received.
If an instruction from the electronic device to display an image captured by a camera of the vehicle is received at block 302, an instruction is sent to the in-vehicle electronic device 130 at block 304, and the in-vehicle electronic device 130 is associated with the electronic device 110.
At block 306, a distance between the location of the electronic device 110 and the predetermined location is determined.
At block 308, it is determined whether the distance is less than a threshold distance.
If it is determined at block 308 that the distance is less than the threshold distance, then at block 310, a plurality of images of the plurality of physical areas in which the plurality of display devices are located are captured via the plurality of cameras 140 located in those areas.
At block 312, faces are identified in the plurality of images.
If it is determined at block 312 that a face is identified in any of the plurality of images, at block 314, it is determined whether the identified face matches a user associated with the electronic device.
If it is determined at block 314 that the recognized face matches a user associated with the electronic device, then at block 316 the display device included in the physical area corresponding to that image is determined to be the first display device.
At block 318, the captured image is received from the in-vehicle electronic device.
If the captured image is received from the in-vehicle electronic device at block 318, the image or a link to the image is forwarded to the first display device for display of the image at the first display device at block 320.
In this way, when the user approaches or is at the predetermined location, the physical area in which the user is located can be determined from the images captured by the plurality of cameras associated with that location, and the image from the in-vehicle electronic device is forwarded to the display device corresponding to that area. The cameras are thus invoked at a more appropriate time to locate the user, the images collected by the in-vehicle electronic device intelligently follow the user's location, and the user experience is improved.
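The display-selection logic of blocks 310 to 316 can be sketched as follows, with the face-recognition and user-matching steps abstracted behind callables. All names here are illustrative assumptions; the disclosure does not prescribe a particular recognition library.

```python
def select_display(images, recognize_face, matches_user):
    """Pick the display device whose area's camera sees the user's face.

    images:         mapping from display_id to that physical area's camera frame.
    recognize_face: callable(frame) -> detected face (or None if no face).
    matches_user:   callable(face) -> True if the face matches the user
                    associated with the electronic device.

    Returns the matching display_id (the "first display device"),
    or None to fall back to displaying on the electronic device itself.
    """
    for display_id, frame in images.items():
        face = recognize_face(frame)
        if face is not None and matches_user(face):
            return display_id
    return None
```

Returning `None` corresponds to the fallback branches (claims 3 and 4): when no face is recognized or the face does not match, the image is forwarded to the electronic device instead.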
Fig. 4 shows a flow diagram of a method 400 for information processing according to an embodiment of the present disclosure. It should be understood that method 400 may also include additional steps not shown and/or may omit steps shown, as the scope of the disclosure is not limited in this respect.
At 402, the electronic device 110 sends an instruction to the server 120 to display a surveillance video stream captured by a camera of a vehicle about an interior environment or an exterior environment of the vehicle.
At 404, the server 120 sends the instruction to the in-vehicle electronic device 130, and the in-vehicle electronic device 130 is associated with the electronic device 110.
At 406, the server 120 establishes a first connection channel with the in-vehicle electronic device 130 for monitoring the video stream.
At 408, the server 120 determines a distance between the location of the electronic device 110 and the predetermined location.
At 410, the server 120 determines whether the distance is less than a threshold distance.
If it is determined at 410 that the distance is less than the threshold distance, then at 412, 414, and 416, the server 120 sends instructions to the plurality of cameras 140 to capture images of the plurality of physical areas in which the plurality of display devices are located, the plurality of cameras and the plurality of display devices being associated with the predetermined location. Although three cameras are shown in Fig. 4, it should be understood that the number of cameras may be greater or fewer.
At 418, 420, 422, multiple cameras 140 capture multiple images.
At 424, 426, 428, the plurality of cameras 140 transmit the captured plurality of images to the server 120.
At 430, the server 120 identifies faces in the plurality of images.
If at 430 it is determined that a face is identified in any of the plurality of images, at 432 the server 120 determines if the identified face matches a user associated with the electronic device.
If, at 432, the server 120 determines that the recognized face matches a user associated with the electronic device, then, at 434, the server 120 determines that the display device included in the physical area corresponding to that image is the first display device.
At 436, the server 120 establishes a second connection channel with the first display device 150 for monitoring the video stream.
At 438, the in-vehicle electronic device 130 transmits the surveillance video stream to the server 120 via the first connection channel.
At 440, the server 120 forwards the surveillance video stream to the first display device 150 via the second connection channel.
At 442, the first display device 150 displays the surveillance video stream.
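The forwarding at steps 438 to 440 amounts to relaying stream chunks from the first connection channel (vehicle to server) to the second (server to display device). A minimal queue-based sketch is shown below; the channel objects and the end-of-stream sentinel are assumptions for illustration, since the disclosure does not specify a transport.

```python
import queue
import threading

def relay(source_channel, sink_channel, stop):
    """Forward video chunks from the first connection channel to the second.

    source_channel: queue.Queue fed by the in-vehicle electronic device.
    sink_channel:   queue.Queue drained by the first display device.
    stop:           threading.Event allowing the server to abort the relay
                    (e.g. when the display device is turned off).
    """
    while not stop.is_set():
        chunk = source_channel.get()
        if chunk is None:            # end-of-stream sentinel
            sink_channel.put(None)   # propagate shutdown to the display side
            break
        sink_channel.put(chunk)
```

In practice each channel would be a network connection rather than an in-process queue, but the control flow — pull from the vehicle-side channel, push to the display-side channel until the stream ends or the relay is stopped — is the same.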
Fig. 5 illustrates a schematic block diagram of an example device 500 that may be used to implement embodiments of the present disclosure. For example, the server 120 as shown in FIG. 1 may be implemented by the device 500. As shown, the device 500 includes a central processing unit (CPU) 510 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 520 or loaded from a storage unit 580 into a random access memory (RAM) 530. In the RAM 530, various programs and data required for the operation of the device 500 can also be stored. The CPU 510, the ROM 520, and the RAM 530 are connected to each other by a bus 540. An input/output (I/O) interface 550 is also connected to the bus 540.
Various components in device 500 are connected to I/O interface 550, including: an input unit 560 such as a keyboard, a mouse, a microphone, and the like; an output unit 570 such as various types of displays, speakers, and the like; a storage unit 580 such as a magnetic disk, an optical disk, or the like; and a communication unit 590 such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 590 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes described above, such as the methods 200 to 400, may be performed by the processing unit 510. For example, in some embodiments, the methods 200 to 400 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 580. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 520 and/or the communication unit 590. When the computer program is loaded into the RAM 530 and executed by the CPU 510, one or more of the acts of the methods 200 to 400 described above may be performed.
The present disclosure may be methods, apparatus, systems, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the instructions to personalize the circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, the practical application, or technical improvements over techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A method for information processing, comprising:
at a server, in response to receiving an instruction from an electronic device to display an image captured by a camera of a vehicle, transmitting the instruction to an in-vehicle electronic device associated with the electronic device;
acquiring a plurality of images of a plurality of physical areas where a plurality of display devices are located;
determining a physical area in which a user associated with the electronic device is located based on the plurality of images; and
in response to receiving the captured image from the in-vehicle electronic device, forwarding the image or a link to the image to a first display device of a physical area in which the user is located, so as to display the image on the first display device.
2. The method of claim 1, wherein determining a physical area in which a user associated with the electronic device is located comprises:
in response to determining that a face is identified in any of the plurality of images, determining whether the identified face matches a user associated with the electronic device; and
in response to determining that the identified face matches the user, determining that a display device included in a physical area corresponding to that image is the first display device.
3. The method of claim 2, further comprising:
in response to determining that the identified face does not match the user, forwarding the image or a link to the image from the in-vehicle electronic device to the electronic device for display of the image at the electronic device.
4. The method of claim 1, further comprising:
in response to determining that no human face or human figure is recognized in any of the plurality of images, forwarding the image or a link to the image from the in-vehicle electronic device to the electronic device for display of the image at the electronic device.
5. The method of claim 1, wherein the plurality of images are captured by a plurality of cameras, the plurality of cameras and the plurality of display devices associated with predetermined locations.
6. The method of claim 5, wherein acquiring a plurality of images for a plurality of physical areas in which a plurality of display devices are located comprises:
determining a distance between the location of the electronic device and the predetermined location; and
in response to determining that the distance is less than a threshold distance, capturing the plurality of images via the plurality of cameras located in the plurality of physical areas.
7. The method of claim 6, wherein forwarding the image or the link to the image to the first display device of the physical area in which the user is located comprises:
in response to determining that the first display device is not started, sending a start instruction to the first display device or sending an instruction for starting the first display device to the electronic device; and
in response to determining that the first display device has been launched, forwarding the image or the link to the image to the first display device.
8. The method of claim 6, further comprising:
in response to determining that the distance is greater than or equal to the threshold distance, forwarding the image or a link to the image from the in-vehicle electronic device to the electronic device for display of the image at the electronic device.
9. The method of claim 8, further comprising:
in response to determining that the distance is greater than or equal to the threshold distance, sending a turn-off instruction or a standby instruction to the first display device.
10. The method of claim 1, wherein the image is a surveillance video stream relating to an interior environment or an exterior environment of the vehicle.
11. An electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform the steps of the method of any of claims 1 to 10.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a machine, implements the method of any of claims 1-10.
CN202010022097.1A 2020-01-09 2020-01-09 Method, apparatus and computer storage medium for information processing Active CN113099170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010022097.1A CN113099170B (en) 2020-01-09 2020-01-09 Method, apparatus and computer storage medium for information processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010022097.1A CN113099170B (en) 2020-01-09 2020-01-09 Method, apparatus and computer storage medium for information processing

Publications (2)

Publication Number Publication Date
CN113099170A true CN113099170A (en) 2021-07-09
CN113099170B CN113099170B (en) 2023-05-12

Family

ID=76663679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010022097.1A Active CN113099170B (en) 2020-01-09 2020-01-09 Method, apparatus and computer storage medium for information processing

Country Status (1)

Country Link
CN (1) CN113099170B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0721422D0 (en) * 2007-10-31 2007-12-12 Symbian Software Ltd Method and system for providing media content to a reproduction apparatus
CN103650485A (en) * 2011-07-12 2014-03-19 日产自动车株式会社 Vehicle monitoring device, vehicle monitoring system, terminal device, and vehicle monitoring method
CN105578229A (en) * 2015-12-15 2016-05-11 小米科技有限责任公司 Electronic equipment control method and device
CN107438064A (en) * 2016-05-27 2017-12-05 通用汽车环球科技运作有限责任公司 Response is started to the video camera of vehicle safety event
CN107436815A (en) * 2016-05-28 2017-12-05 富泰华工业(深圳)有限公司 Information display system and display methods
CN110140337A (en) * 2017-01-17 2019-08-16 高通股份有限公司 User location perceives intelligent event handling
CN110336892A (en) * 2019-07-25 2019-10-15 北京蓦然认知科技有限公司 A kind of more equipment collaboration methods, device
CN110537165A (en) * 2017-10-26 2019-12-03 华为技术有限公司 A kind of display methods and device


Also Published As

Publication number Publication date
CN113099170B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
EP3232343A1 (en) Method and apparatus for managing video data, terminal, and server
US20190092345A1 (en) Driving method, vehicle-mounted driving control terminal, remote driving terminal, and storage medium
CN107786794B (en) Electronic device and method for providing an image acquired by an image sensor to an application
RU2625338C1 (en) Method, device and system for installation of wireless network connection
US20160044269A1 (en) Electronic device and method for controlling transmission in electronic device
CN110113729A (en) The communication means and mobile unit of mobile unit
CN112040468B (en) Method, computing device, and computer storage medium for vehicle interaction
KR101099838B1 (en) Remote a/s method using video phone call between computer and mobile phone
CN113282962B (en) Privacy-related request processing method, processing device and storage medium
EP2963889A1 (en) Method and apparatus for sharing data of electronic device
CN105515831A (en) Network state information display method and device
CN104330985A (en) Information processing method and device
US20150293670A1 (en) Method for operating message and electronic device therefor
CN108551525B (en) State determination method of movement track and mobile terminal
CN108012270B (en) Information processing method, equipment and computer readable storage medium
CN109729582B (en) Information interaction method and device and computer readable storage medium
CN106572131A (en) Media data sharing method and system in Internet of things
CN110535754B (en) Image sharing method and device
US10149137B2 (en) Enhanced communication system
CN113099170B (en) Method, apparatus and computer storage medium for information processing
CN113726905B (en) Data acquisition method, device and equipment based on home terminal equipment
CN115546952A (en) Method and device for managing parent access through cloud, electronic equipment and storage medium
CN112911241A (en) Vehicle remote monitoring system, method, device, equipment and storage medium
CN210534865U (en) Sign-in system
CN113822216A (en) Event detection method, device, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201821 room 208, building 4, No. 1411, Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai

Applicant after: Botai vehicle networking technology (Shanghai) Co.,Ltd.

Address before: Room 208, building 4, No. 1411, Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai 201821

Applicant before: SHANGHAI PATEO ELECTRONIC EQUIPMENT MANUFACTURING Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3701, No. 866 East Changzhi Road, Hongkou District, Shanghai, 200080

Patentee after: Botai vehicle networking technology (Shanghai) Co.,Ltd.

Country or region after: China

Address before: 201821 room 208, building 4, No. 1411, Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai

Patentee before: Botai vehicle networking technology (Shanghai) Co.,Ltd.

Country or region before: China
