Protocols are configured elements in Libav that allow access to
resources which require the use of a particular protocol.

When you configure your Libav build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".
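
For instance, a build that disables all protocols and then selectively
re-enables only a couple of them (the protocol names here are purely
illustrative) could be configured as follows:

./configure --list-protocols
./configure --disable-protocols --enable-protocol=file --enable-protocol=http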

The option "-protocols" of the ff* tools will display the list of
supported protocols.
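
For example, to print the protocols supported by @command{avconv}:

avconv -protocols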

A description of the currently available protocols follows.

@section concat

Physical concatenation protocol.

Allows reading and seeking from many resources in sequence as if they
were a single resource.

A URL accepted by this protocol has the syntax:

concat:@var{URL1}|@var{URL2}|...|@var{URLN}

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example, to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @command{avplay} use the
command:

avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg

Note that you may need to escape the character "|" which is special for
many shells.

@section file

File access protocol.

Allows reading from or writing to a file.

For example, to read from a file @file{input.mpeg} with @command{avconv}
use the command:

avconv -i file:input.mpeg output.mpeg

The ff* tools default to the file protocol, that is a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".
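
For instance, the following two commands are equivalent:

avconv -i file:input.mpeg output.mpeg
avconv -i input.mpeg output.mpeg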

@section hls

Read an Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.

The nested protocol is declared by specifying
"+@var{proto}" after the hls URI scheme name, where @var{proto}
is either "file" or "http".

hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8

Using this protocol is discouraged - the hls demuxer should work
just as well (if not, please report the issues) and is more complete.
To use the hls demuxer instead, simply use the direct URLs to the
m3u8 files.
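
For example, the remote resource above can be opened directly through
the hls demuxer instead:

avplay http://host/path/to/remote/resource.m3u8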

@section http

HTTP (Hyper Text Transfer Protocol).

@section mmst

MMS (Microsoft Media Server) protocol over TCP.

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:

mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
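
For example (the server name and path are placeholders), to play a
stream exposed over MMSH:

avplay mmsh://server.example.com/someapp/somestream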

@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.

# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
avconv -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
avconv -i input.flv -f avi -y md5:

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Allows reading from and writing to UNIX pipes.

The accepted syntax is:

pipe:[@var{number}]

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @command{avconv}:

cat test.wav | avconv -i pipe:0
# ...this is the same as...
cat test.wav | avconv -i pipe:

For writing to stdout with @command{avconv}:

avconv -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
avconv -i test.wav -f avi pipe: | cat > test.avi

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
content across a TCP/IP network.

The required syntax is:

rtmp://@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]

The accepted parameters are:

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (1935 by default).

@item app
It is the name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
the value parsed from the URI through the @code{rtmp_app} option, too.

@item playpath
It is the path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:". You
can override the value parsed from the URI through the @code{rtmp_playpath}
option, too.

@item listen
Act as a server, listening for an incoming connection.

@item timeout
Maximum time to wait for the incoming connection. Implies listen.

Additionally, the following parameters can be set via command line options
(or in code via @code{AVOption}s):

@item rtmp_app
Name of application to connect on the RTMP server. This option
overrides the parameter specified in the URI.

@item rtmp_buffer
Set the client buffer time in milliseconds. The default is 3000.

@item rtmp_conn
Extra arbitrary AMF connection parameters, parsed from a string,
e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named, by prefixing the type with 'N' and specifying the name before
the value (i.e. @code{NB:myFlag:1}). This option may be used multiple
times to construct arbitrary AMF sequences.

@item rtmp_flashver
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2.

@item rtmp_flush_interval
Number of packets flushed in the same request (RTMPT only). The default
is 10.

@item rtmp_live
Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is @code{any}, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are @code{live} and
@code{recorded}.

@item rtmp_pageurl
URL of the web page in which the media was embedded. By default no
value will be sent.

@item rtmp_playpath
Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.

@item rtmp_subscribe
Name of live stream to subscribe to. By default no value will be sent.
It is only sent if the option is specified or if rtmp_live
is set to live.

@item rtmp_swfhash
SHA256 hash of the decompressed SWF file (32 bytes).

@item rtmp_swfsize
Size of the decompressed SWF file, required for SWFVerification.

@item rtmp_swfurl
URL of the SWF player for the media. By default no value will be sent.

@item rtmp_swfverify
URL to player swf file, compute hash/size automatically.

@item rtmp_tcurl
URL of the target stream. Defaults to proto://host[:port]/app.

For example, to read with @command{avplay} a multimedia resource named
"sample" from the application "vod" on an RTMP server "myserver":

avplay rtmp://myserver/vod/sample
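
The application and playpath can also be supplied through the options
listed above instead of being embedded in the URI; a hypothetical
equivalent of the previous command would be:

avplay -rtmp_app vod -rtmp_playpath sample rtmp://myserver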

@section rtmpe

Encrypted Real-Time Messaging Protocol.

The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
streaming multimedia content within standard cryptographic primitives,
consisting of Diffie-Hellman key exchange and HMACSHA256, generating
a pair of RC4 keys.

@section rtmps

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming
multimedia content across an encrypted connection.

@section rtmpt

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpte

Encrypted Real-Time Messaging Protocol tunneled through HTTP.

The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
is used for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpts

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.

@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:

@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@command{avconv}:

avconv -re -i myfile -f flv rtmp://myserver/live/mystream

To play the same stream using @command{avplay}:

avplay "rtmp://myserver/live/mystream live=1"

@section rtsp

RTSP is not technically a protocol handler in libavformat; it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{http://github.com/revmischa/rtsp-server, RTSP server}).

The required syntax for an RTSP URL is:

rtsp://@var{hostname}[:@var{port}]/@var{path}

The following options (set on the @command{avconv}/@command{avplay} command
line, or set in code via @code{AVOption}s or in @code{avformat_open_input})
are supported:

Flags for @code{rtsp_transport}:

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item udp_multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.

Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.
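
As an illustration (assuming the usual "+"-separated syntax for
flag-type options), the demuxer can be asked to try UDP first and fall
back to TCP:

avplay -rtsp_transport udp+tcp rtsp://server/video.mp4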

Flags for @code{rtsp_flags}:

@item filter_src
Accept packets only from negotiated peer address and port.

@item listen
Act as a server, listening for an incoming connection.

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the @code{max_delay} field of AVFormatContext).

When watching multi-bitrate Real-RTSP streams with @command{avplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a} (see the last example below).

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:

avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4

To watch a stream tunneled over HTTP:

avplay -rtsp_transport http rtsp://server/video.mp4

To send a stream in realtime to an RTSP server, for others to watch:

avconv -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp

To receive a stream in realtime:

avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
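
To select specific video and audio streams of a multi-bitrate
Real-RTSP stream (the stream numbers here are placeholders):

avplay -vst 1 -ast 2 rtsp://server/video.mp4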

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat; it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

The syntax for a SAP URL given to the muxer is:

sap://@var{destination}[:@var{port}][?@var{options}]

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:

avconv -re -i @var{input} -f sap sap://224.0.0.255?same_port=1

Similarly, for watching in @command{avplay}:

avconv -re -i @var{input} -f sap sap://224.0.0.255

And for watching in @command{avplay}, over IPv6:

avconv -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]

The syntax for a SAP URL given to the demuxer is:

sap://[@var{address}][:@var{port}]

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:

avplay sap://

To play back the first stream announced on the default IPv6 SAP multicast address:

avplay sap://[ff0e::2:7ffe]

@section tcp

Transmission Control Protocol.

The required syntax for a TCP URL is:

tcp://@var{hostname}:@var{port}[?@var{options}]

@item listen
Listen for an incoming connection.

avconv -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
avplay tcp://@var{hostname}:@var{port}
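
For instance (host, port and format chosen arbitrarily), to serve an
MPEG-TS stream on the local machine and play it back:

avconv -i input.mpeg -f mpegts tcp://localhost:9999?listen
avplay tcp://localhost:9999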

@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:

udp://@var{hostname}:@var{port}[?@var{options}]

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The following options are supported:

@item buffer_size=@var{size}
Set the UDP buffer size in bytes.

@item localport=@var{port}
Override the local UDP port to bind with.

@item localaddr=@var{addr}
Choose the local IP address. This is useful e.g. if sending multicast
and the host has multiple interfaces, where the user can choose
which interface to send on by specifying the IP address of that interface.

@item pkt_size=@var{size}
Set the size in bytes of UDP packets.

@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.

@item ttl=@var{ttl}
Set the time to live value (for multicast only).

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with ff_udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.

@item sources=@var{address}[,@var{address}]
Only receive packets sent to the multicast group from one of the
specified sender IP addresses.

@item block=@var{address}[,@var{address}]
Ignore packets sent to the multicast group from the specified
sender IP addresses.

Some usage examples of the udp protocol with @command{avconv} follow.

To stream over UDP to a remote endpoint:

avconv -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}

To stream in mpegts format over UDP using 188-sized UDP packets, using a large input buffer:

avconv -i @var{input} -f mpegts "udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535"

To receive over UDP from a remote endpoint:

avconv -i udp://[@var{multicast-address}]:@var{port}
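
As a further sketch (addresses and port are placeholders), to receive
from a multicast group while only accepting packets from a single
sender:

avconv -i "udp://224.1.2.3:5004?sources=192.168.0.10" @var{output}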