Controlling Delivery Of Captured Streams

Shaw; Philip ;   et al.

Patent Application Summary

U.S. patent application number 15/736564 was filed with the patent office on 2018-05-17 for controlling delivery of captured streams. The applicant listed for this patent is PIKSEL, INC.. Invention is credited to Kristan Bullett, Mark Christie, Sean Everett, Fabrice Hamaide, Peter Heiland, Hans-Jurgen Maas, Philip Shaw, Ralf Tilmann, Miles Weaver.

Application Number: 20180139472 15/736564
Document ID: /
Family ID: 56132942
Filed Date: 2018-05-17

United States Patent Application 20180139472
Kind Code A1
Shaw; Philip ;   et al. May 17, 2018

CONTROLLING DELIVERY OF CAPTURED STREAMS

Abstract

There is provided a technique for providing streaming services, comprising: a plurality of capture devices, each for generating a captured stream of content; a server, for receiving the plurality of captured streams, and for outputting at least one output stream; and an editing device for outputting a control signal to the server, wherein the server processes captured streams to provide one or more modified output stream in dependence on the control signal.


Inventors: Shaw; Philip; (York, GB) ; Tilmann; Ralf; (Mannheim, DE) ; Maas; Hans-Jurgen; (Mainz, DE) ; Heiland; Peter; (Dover, MA) ; Hamaide; Fabrice; (Paris, FR) ; Bullett; Kristan; (York, GB) ; Weaver; Miles; (York, GB) ; Everett; Sean; (New York, NY) ; Christie; Mark; (London, GB)
Applicant:
Name: PIKSEL, INC.
City: Wilmington
State: DE
Family ID: 56132942
Appl. No.: 15/736564
Filed: June 15, 2016
PCT Filed: June 15, 2016
PCT NO: PCT/EP2016/063797
371 Date: December 14, 2017

Related U.S. Patent Documents

Application Number Filing Date Patent Number
62175878 Jun 15, 2015

Current U.S. Class: 1/1
Current CPC Class: H04N 21/2353 20130101; H04N 21/44029 20130101; H04N 21/25866 20130101; H04N 21/232 20130101; H04N 21/242 20130101; H04N 21/44204 20130101; H04N 21/4334 20130101; H04N 21/8186 20130101; H04N 21/2541 20130101; H04N 21/433 20130101; H04N 21/2187 20130101; H04N 21/414 20130101; H04N 21/8358 20130101; H04H 60/46 20130101; H04N 21/21805 20130101; H04N 21/4307 20130101; H04L 65/607 20130101; H04N 21/23106 20130101; H04N 21/234336 20130101; H04N 21/23418 20130101; H04N 21/4305 20130101; H04N 21/23439 20130101; H04N 21/4223 20130101; H04N 21/26233 20130101; H04N 21/6332 20130101; H04N 21/2343 20130101; H04N 21/2358 20130101; H04N 21/440236 20130101; H04N 21/26613 20130101; H04N 21/2665 20130101; H04N 21/4828 20130101; H04N 21/41407 20130101; H04N 21/4431 20130101; H04N 21/235 20130101; H04N 21/4331 20130101; H04N 21/633 20130101; H04N 21/8547 20130101; H04N 21/2668 20130101; H04N 21/2407 20130101; H04N 21/26208 20130101; H04N 21/440263 20130101; H04N 21/2402 20130101; H04N 21/84 20130101
International Class: H04N 21/218 20060101 H04N021/218; H04N 21/2187 20060101 H04N021/2187; H04N 21/2343 20060101 H04N021/2343; H04N 21/633 20060101 H04N021/633

Claims



1. A system for providing streaming services comprising: a plurality of capture devices, each for generating a captured stream of content; and a server, for receiving the captured streams of content, for delivering one or more output streams, and for providing metrics to at least one of the plurality of capture devices, the metrics to configure capture properties of at least one of the capture devices.

2.-3. (canceled)

4. The system of claim 40 wherein the server is provided with metadata one of associated with and embedded in each of the captured streams of content, the server being configured to align the timing of the control signal additionally with the metadata of the captured streams of content.

5.-8. (canceled)

9. The system of claim 40 wherein the control signal is generated by the editing device under control of a user of the editing device, the user observing events captured by the plurality of capture devices, wherein the user adjusts the control signal in accordance with a determination as to which of the capture devices should be chosen at any instant in time, the determination identifying which of the captured streams of content should be selected at any instant in time in dependence on the user's determination of one or more of the capture devices to select at that instant.

10. The system of claim 40 wherein the server is configured to apply an overlay to one or more of the output streams in dependence on the control signal.

11. The system of claim 10 wherein one or more of the output streams comprise two or more of the captured streams of content to which one of an overlay and a video effect is applied.

12. (canceled)

13. The system of claim 40 wherein the editing device receives a representation of each or a subset of the captured streams of content.

14.-16. (canceled)

17. The system of claim 40 wherein the editing device is that of a rights holder, and wherein the output streams are associated with a rights holder event.

18. The system of claim 17 wherein the control data is an instruction for the server to one of apply and alter rights control information applied to the output streams.

19. (canceled)

20. A method for providing streaming services comprising: generating, at a plurality of capture devices, captured streams of content; receiving, at a server, the captured streams of content, and outputting one or more output streams; and providing metrics to at least one of the plurality of capture devices, the metrics to configure capture properties of at least one of the capture devices.

21.-22. (canceled)

23. The method of claim 42 wherein the server is provided with metadata one of associated with and embedded in each of the captured streams of content, the server being configured to align the timing of the control signal additionally with the metadata of the captured streams of content.

24.-27. (canceled)

28. The method of claim 42 wherein the control signal is generated by the editing device under control of a user of the editing device, the user observing events being captured by the plurality of capture devices, wherein the user adjusts the control signal in accordance with a determination as to which of the capture devices should be chosen at any instant in time, the determination identifying which of the captured streams of content should be selected at any instant in dependence on the user's determination of one or more of the capture devices to select at that instant.

29. The method of claim 42 wherein the server is configured to apply an overlay to one or more of the output streams in dependence on the control signal.

30. (canceled)

31. The method of claim 29 wherein the server applies the overlay to at least two of the captured streams of content, which as a composite comprise the output streams.

32. (canceled)

33. The method of claim 42 wherein the editing device receives a representation of one or more of the captured streams of content.

34.-36. (canceled)

37. The method of claim 42 wherein the editing device is that of a rights holder, and wherein the output streams are associated with a rights holder event.

38. The method of claim 37 wherein the control data is an instruction for the server to one of apply and alter rights control information applied to the output streams.

39. (canceled)

40. The system of claim 1 further comprising: an editing device for outputting a control signal to the server, wherein the server delivers the one or more output streams as a modified version of one or more of the captured streams of content in dependence on the control signal.

41. The system of claim 1 wherein the metrics comprise one of: ambient noise level indicator, white balance, colour saturation and light meter.

42. The method of claim 20 further comprising: outputting a control signal to the server from an editing device, wherein the server delivers the one or more output streams as a modified version of one or more of the captured streams of content in dependence on the control signal.

43. The method of claim 20 wherein the metrics comprise one of: ambient noise level indicator, white balance, colour saturation and light meter.
Description



BACKGROUND TO THE INVENTION

Field of the Invention

[0001] The invention is concerned with a technique for receiving streams from a plurality of capture devices at a central server, and processing the captured streams such that a modified output stream is generated therefrom in dependence on a control signal.

Description of the Related Art

[0002] It is known to provide data streams from a plurality of different capture devices. The data streams may be provided for viewing by viewing devices. The capture and viewing devices may be, for example, mobile phones.

[0003] Locating captured streams to view can be difficult, and in particular selecting streams to view from a plurality of existing streams can be difficult.

[0004] It is an aim to provide improvements.

SUMMARY OF THE INVENTION

[0005] There is provided a system for providing streaming services, comprising: a plurality of capture devices, each for generating a captured stream of content; a server, for receiving the plurality of captured streams, and for outputting one or more output streams; and an editing device for outputting a control signal to the server, wherein the server delivers the one or more output streams as a modified version of one or more of the input streams in dependence on the control signal.

[0006] The server may align the timing of the control signal with the timing of the captured streams. The control signal may comprise one or more timing marks, and the server is configured to align said one or more timing marks with a time line of received capture data streams.
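The alignment of timing marks described in paragraph [0006] can be sketched as follows. This is a hypothetical illustration, not the application's implementation: the names `TimingMark` and `align_marks`, and the clock-offset model, are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class TimingMark:
    editor_time: float  # timestamp in the editing device's clock (seconds)
    action: str         # e.g. "cut_to:camera2" (illustrative action label)

def align_marks(marks, clock_offset, stream_start):
    """Map each timing mark from the editor's clock onto the timeline
    of a received capture stream.

    clock_offset: estimated editor-to-server clock skew (seconds)
    stream_start: server-side time of the stream's first frame (seconds)
    """
    aligned = []
    for mark in marks:
        server_time = mark.editor_time + clock_offset
        stream_time = server_time - stream_start  # position on stream timeline
        aligned.append((stream_time, mark.action))
    return aligned
```

A mark issued at editor time 10.0 s, with a 0.5 s skew and a stream that started at server time 8.0 s, would land 2.5 s into that stream's timeline.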

[0007] The server may be provided with metadata associated with or embedded in each captured data stream, the server being configured to align the timing of the control signal additionally with the metadata of the captured data stream.

[0008] The server may automatically generate the output stream in dependence on the control signal.

[0009] The control signal may identify which of the plurality of captured streams is to be output from the server. Each captured stream may be mapped to an output. One or more captured streams may be mapped to an output. One or more captured streams may be mapped to one or more outputs.

[0010] One or more output streams may comprise two or more input streams as a composite of videos, effects, transitions or a combination thereof. Examples are a split screen arrangement, or two or more input streams with a crossfade.

[0011] In general, the control signal is used to determine which of a plurality of input signals are mapped to one or more output signals, and in addition to apply any modification to those input signals as mapped to an output signal. These modifications may include applying video effects, applying transitions, applying an overlay etc.
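The mapping described in paragraph [0011] can be sketched as below. This is a minimal illustration under assumed names (`apply_control`, the control-signal dictionary shape); the application does not specify a data format.

```python
def apply_control(control, input_streams):
    """Build output streams from input streams per a control signal.

    control: {output_id: {"inputs": [input_id, ...],
                          "modification": str or None}}
    input_streams: {input_id: stream payload (any representation)}
    """
    outputs = {}
    for out_id, spec in control.items():
        # Select the input streams this output is mapped from.
        selected = [input_streams[i] for i in spec["inputs"]]
        outputs[out_id] = {
            "sources": spec["inputs"],
            "payload": selected,                       # composite of chosen inputs
            "modification": spec.get("modification"),  # e.g. "crossfade", "overlay:logo"
        }
    return outputs
```

A control signal mapping two cameras to a single "main" output with a crossfade would thus yield one output entry carrying both source payloads plus the modification to apply.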

[0012] The control signal may be generated by the editing device under the control of a user of the editing device, said user observing the events being captured by the plurality of capture devices, wherein the user adjusts the control signal in accordance with a determination as to which capture device should be chosen at any instant in time, the determination identifying which of the plurality of capture streams should be selected at any instant in dependence on the user's determination of the capture device or devices to select at that instant.

[0013] The server may be configured to apply an overlay to the output stream in dependence on the control signal. The server may apply an overlay to a captured data stream, which captured data stream is routed to the output stream. The server may apply the overlay to at least two captured data streams which as a composite comprise an output stream.

[0014] The plurality of captured streams may capture live events, and the viewing stream is a live event stream.

[0015] The editing device may receive a representation of each or a subset of the captured data streams. The editing device may receive a lower bandwidth version of the one or more captured data streams.
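The lower-bandwidth representation mentioned in paragraph [0015] could, in one simple form, reduce the frame rate of each captured stream before sending it to the editing device. The sketch below is illustrative only; a real implementation would more likely transcode to a lower bitrate, and the name `make_preview` is an assumption.

```python
def make_preview(frames, keep_every=3):
    """Return a reduced-rate representation of a captured stream for the
    editing device: keep one frame in every `keep_every`. Frame dropping
    stands in here for lower-bitrate transcoding."""
    return frames[::keep_every]
```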

[0016] An editing application may be provided on the editing device.

[0017] The editing device may be a mobile device.

[0018] The editing device may be a device of a rights holder, wherein the output stream is associated with a rights holder event. The control data may be an instruction for the server to apply or alter rights control information applied to the output stream. The rights control information may be digital rights management information.

[0019] There is provided a method for providing streaming services, comprising: generating, at a plurality of capture devices, a captured stream of content; receiving, at a server, the plurality of captured streams, and outputting one or more output streams; and outputting a control signal to the server from an editing device, wherein the server delivers the one or more output streams as a modified version of one or more of the input streams in dependence on the control signal.

[0020] The method may further comprise aligning the timing of the control signal with the timing of the captured streams. The control signal may comprise one or more timing marks, and the server is configured to align said one or more timing marks with a time line of received capture data streams.

[0021] The method may be provided with metadata associated with or embedded in each captured data stream, the server being configured to align the timing of the control signal additionally with the metadata of the captured data stream.

[0022] The method may automatically generate the output stream in dependence on the control signal.

[0023] The control signal may identify which of the plurality of captured streams is to be output from the server. Each captured stream may be mapped to an output.

[0024] The control signal may be generated by the editing device under the control of a user of the editing device, said user observing the events being captured by the plurality of capture devices, wherein the user adjusts the control signal in accordance with a determination as to which capture device should be chosen at any instant in time, the determination identifying which of the plurality of capture streams should be selected at any instant in dependence on the user's determination of the capture device or devices to select at that instant.

[0025] The server may be configured to apply an overlay to the output stream in dependence on the control signal. The server may apply an overlay to a captured data stream, which captured data stream is routed to the output stream. The server may apply the overlay to at least two captured data streams which as a composite comprise an output stream.

[0026] The plurality of captured streams may capture live events, and the viewing stream may be a live event stream. The editing device may receive a representation of each or a subset of the captured data streams. The editing device may receive a lower bandwidth version of the one or more captured data streams.

[0027] The method may provide an editing application on the editing device.

[0028] In an example there is provided a method of: receiving a plurality of data streams from a plurality of capture devices at a streaming server; receiving stream edit control signals for an edit device; processing the plurality of data streams under the control of the edit device; and in dependence on the processing, generating one or more viewing streams. This aspect may be referred to as director app.

[0029] The plurality of capture devices and the edit device are connected to the streaming server by a network. The network may be a public network, e.g. the Internet. The connection of each of the plurality of capture devices and the edit device to the streaming server by a network is an independent connection.

[0030] The step of processing the plurality of data streams comprises receiving the plurality of data streams, and editing at least one of the plurality of data streams to provide one or more output streams.

[0031] The step of processing may comprise transmitting a control signal to a capture device. The control signal may be to manipulate the capture device. The control signal may control a parameter of the capture device.

[0032] Since the edit device is connected via a network such as the open Internet, the edit control signals may be synchronised to be applied to the video at the correct times.

[0033] Whilst synchronisation needs to be maintained between streams, the edit device also needs its signals synchronised to those streams, and/or mapped from the set of low quality streams used for editing to the high quality streams used for viewing.

[0034] The edit device may display previews of the video content in one or more video streams received from the plurality of capture devices. Where the edit device displays previews of a plurality of streams, the edit device selects one or more streams to be viewed.

[0035] The edit device may display stored video content. The previously stored video content may be live-streamed. The edit device may edit the content for retransmission.

BRIEF DESCRIPTION OF THE FIGURES

[0036] The invention is now described by way of reference to the following figures, in which:

[0037] FIG. 1 illustrates an example of a system architecture in which described examples and embodiments as described may be implemented;

[0038] FIG. 2 illustrates an example scenario for controlling captured streams; and

[0039] FIG. 3 illustrates a further example scenario for controlling captured streams.

DESCRIPTION OF PREFERRED EMBODIMENTS

[0040] With reference to FIG. 1 there is illustrated a system architecture within which embodiments may be implemented.

[0041] With reference to FIG. 1 there is illustrated: a plurality of devices, labelled capture devices, denoted by reference numerals 12a, 12b, 12c; a plurality of devices, labelled viewing devices, denoted by reference numerals 16a, 16b; a device, labelled editing device, denoted by reference numeral 20a; a network denoted by reference numeral 4; and a server denoted by reference numeral 2.

[0042] Each of the devices 12a, 12b, 12c is referred to as a capture device as in the described embodiments of the invention the devices capture content. However the devices are not limited to capturing content, and may have other functionality and purposes. In examples each capture device 12a, 12b, 12c may be a mobile device such as a mobile phone.

[0043] Each of the capture devices 12a, 12b, 12c may capture an image utilising a preferably integrated image capture device (such as a video camera), and may thus generate a video stream on a respective communication line 14a, 14b, 14c. The respective communication lines 14a, 14b, 14c provide inputs to the network 4, which is preferably a public network such as the Internet. The communication lines 14a, 14b, 14c are illustrated as bi-directional, to show that the capture devices 12a, 12b, 12c may receive signals as well as generate signals.

[0044] The server 2 is configured to receive inputs from the capture devices 12a, 12b, 12c as denoted by the bi-directional communication lines 6, connected between the server 2 and the network 4. In embodiments, the server 2 receives a plurality of video streams from the capture devices, as the signals on lines 14a, 14b, 14c are video streams.

[0045] The server 2 may process the video streams received from the capture devices as will be discussed further hereinbelow.

[0046] The server 2 may generate further video streams on bi-directional communication line 6 to the network 4, to the bi-directional communication lines 18a, 18b, associated with the devices 16a, 16b respectively.

[0047] Each of the devices 16a, 16b is referred to as a viewing device as in the described embodiments of the invention the devices allow content to be viewed. However the devices are not limited to providing viewing of content, and may have other functionality and purposes. In examples each viewing device 16a, 16b may be a mobile device such as a mobile phone.

[0048] The viewing devices 16a and 16b may be associated with a display (preferably an integrated display) for viewing the video streams provided on the respective communication lines 18a, 18b.

[0049] A single device may be both a capture device and a viewing device. Thus, for example, a mobile phone device may be enabled in order to operate as both a capture device and a viewing device.

[0050] A device operating as a capture device may generate multiple video streams, such that a capture device such as capture device 12a may be connected to the network 4 via multiple video streams, with multiple video streams being provided on communication line 14a.

[0051] A viewing device may be arranged in order to receive multiple video streams. Thus a viewing device such as viewing device 16a may be arranged to receive multiple video streams on communication line 18a.

[0052] A single device may be a capture device providing multiple video streams and may be a viewing device receiving multiple video streams.

[0053] Each capture device and viewing device is connected to the network 4 with a bi-directional communication link, and thus one or all of the viewing devices 16a, 16b may provide a signal to the network 4 in order to provide a feedback or control signal to the server 2. The server 2 may provide control signals to the network 4 in order to provide control signals to one or more of the capture devices 12a, 12b, 12c.

[0054] The capture devices 12a, 12b, 12c are preferably independent of each other, and are independent of the server 2. Similarly the viewing devices 16a, 16b are preferably independent of each other, and are independent of the server 2.

[0055] The capture devices 12a, 12b, 12c are shown in FIG. 1 as communicating with the server 2 via a single network 4. In practice the capture devices 12a, 12b, 12c may be connected to the server 2 via multiple networks, and there may not be a common network path for the multiple capture devices to the server 2. Similarly the viewing devices 16a, 16b may be connected to the server 2 via multiple networks, and there may not be a single common network path from the server 2 to the viewing devices 16a, 16b.

[0056] The system architecture of FIG. 1 may be used to stream live content from capture devices to the server, and then for viewing devices to access the live content in streams from the server.

[0057] FIG. 2 illustrates two capture devices 500 and 502 which may correspond to the capture devices 12a, 12b, 12c of FIG. 1; a so-called director capture device 504, which may correspond to one of the capture devices of FIG. 1 or to the editing device 20a of FIG. 1; and a network 506 which may correspond to the network 4 of FIG. 1.

[0058] In FIG. 2, as shown, the device 504 may receive video streams from the capture devices 500 and 502, and provide a consolidated video stream to the network 506.

[0059] It will be understood that the arrangement illustrated in FIG. 2 is exemplary. In embodiments the director device 504 may receive the captured streams directly. In other examples the captured streams may be provided directly to the server as described above, and the director device may provide control inputs to the server 2.

[0060] FIG. 3 illustrates streaming functional modules of the server 2 of FIG. 1, including an event recognition module 514 and a director module 516. Reference numeral 508 denotes the streaming functional module of the server 2.

[0061] As shown in FIG. 3, the streaming server 508 may be associated with the module 514, which determines whether an event is recognised, and provides a video stream as either being associated with an event or not associated with an event.

[0062] In general, there is provided an application that offers features for controlling, editing, processing or otherwise managing one or more live streams. This application may be a secondary application, provided in addition to a main application associated with a service for delivering and accessing live streams.

[0063] For a given event, a publisher may use this application to take one or more input streams from capture devices and brand them as a single, coherent collection of synchronised streams. These streams may originate from devices the publisher operates, or may be crowd-sourced from consumer devices present at an event, a concert, a collection of outside broadcast locations etc.

[0064] Much of the content captured at a particular device may be configured, marshalled and controlled by a managing director application.

[0065] For a user of the system such as a content provider, a value of this application is that in creating content for viewers, there is provided a consistent set of reliable professional tools with which to shape their output. The convenience of the application being on a mobile device, such as a tablet, means that no specialist or expensive hardware is required. Consumer-class devices can be used to provide professional grade material. This allows any event, no matter how casual, short or low-budget, to benefit from the same quality of production tools.

[0066] For an end user--i.e. an editor--content originating from any contributing source can be managed and controlled in a consistent way, meaning that it is easy to perform common tasks with content no matter what its origin. This frees the editor from having to be concerned with compatible frame rates, stream qualities, synchronisation issues, or the behaviour of overlays across transitions from one camera to another.

[0067] A single contributor may be able to create an event with a number of different devices contributing live content and from which a director edits together, in real time, at least one published output stream. While there may be a single main published stream, the director may also choose one or more raw camera feeds for viewers to switch to.

[0068] A use case may be a church which wishes to broadcast an event. Two cameras are set up on the speaker, another two on the audience, and a final one on a speaker at an off-site location. During the event the director publishes a stream that uses one or other shot of the speaker, interspersed with reaction shots from the congregation. For one segment of the event the director may switch to the off-site speaker. Throughout, the director allows viewers to watch either the main stream or any one of the other streams except the off-site speaker.

[0069] Another use case is where a breaking news story has brought a number of reporters to various locations around an event. A director has feeds from each of these, plus from a studio. As events unfold the director wants to bring content from the field into the news programme by switching between the studio and one of the reporters.

[0070] For a customer, there is less reliance on a single point of view being sufficient to capture the mood or full range of content from an event. Greater flexibility is provided when composing a published stream. If one camera's video becomes uninteresting, a switch can be made to another camera feed and all the viewers can be brought along.

[0071] For a contributor end user there is an ability to set up a range of shots rather than having to be a moving cameraman. Each camera can be configured for optimal video capture in its given location, meaning it is possible to concentrate on content capture, not on momentary setting adjustments.

[0072] For a viewer content is more interesting. There are fewer moments where action slackens off. The overall feel provided is more like television, but with the added advantage of being able to see real-time behind the scenes and/or alternative angle content.

[0073] Another example is the use of metrics that can be offered to a single contributor through the camera application, or to a user of a director application using the professional publishing application. This can allow, for example, recording consistency to be maintained across a number of remotely controlled devices.

[0074] Example metrics are ambient noise level indicator, white balance, colour saturation, light meter etc. A content capture monitor can be provided that can be used to allow better set-up and configuration, or on-the-fly adjustment, of capture properties.
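One way such metrics could drive remote configuration is a feedback loop in which the server compares each device's reported values with targets and returns corrections. The sketch below is purely illustrative: the target values, tolerance, and names (`TARGETS`, `suggest_adjustments`) are assumptions not taken from the application.

```python
# Illustrative target capture properties; real values would be chosen
# by the director or derived from a reference device.
TARGETS = {"white_balance_k": 5600, "light_level": 0.5}

def suggest_adjustments(metrics, tolerance=0.1):
    """Compare a device's reported metrics against the targets and
    return signed corrections for any property that deviates by more
    than `tolerance` (as a fraction of the target)."""
    adjustments = {}
    for prop, target in TARGETS.items():
        value = metrics.get(prop)
        if value is None:
            continue  # device did not report this metric
        if abs(value - target) / target > tolerance:
            adjustments[prop] = target - value
    return adjustments
```

A device reporting a 4000 K white balance against a 5600 K target would receive a +1600 K correction, while a light level within tolerance would be left alone.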

[0075] For the customer, this brings studio quality controls to ad-hoc video capture, meaning that the same production values, planning and directorial control can be deployed allowing for smooth re-use of talent and/or recording resources.

[0076] For the contributor, a previously unavailable level of shot configuration can be provided for a professional recording session.

[0077] All the examples and embodiments described herein may be implemented as processes in software. When implemented as processes in software, the processes (or methods) may be provided as executable code which, when run on a device having computer capability, implements a process or method as described. The executable code may be stored on a computer device, or may be stored on a memory and may be connected to or downloaded to a computer device.

[0078] Examples and embodiments are described herein, and any part of any example or embodiment may be combined with any part of any other example or embodiment. Parts of examples or embodiments are not limited to being implemented in combination with a part of any other example or embodiment described. Features described are not limited to being only in the combination as presented.

* * * * *

