\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg FAQ
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage

@top

@contents

@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?

Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?

No. Windows DLLs are not portable, bloated and often slow.
Moreover FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.

@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.

@section Which codecs are supported by Windows?

Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.

The following list of video codecs should work on most Windows systems:
@table @option
@item msmpeg4v2
.avi/.asf
@item msmpeg4
.asf only
@item wmv1
.asf only
@item wmv2
.asf only
@item mpeg4
Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
@item mpeg1video
.mpg only
@end table
Note, ASF files often have .wmv or .wma extensions in Windows. It should also
be mentioned that Microsoft claims a patent on the ASF format, and may sue
or threaten users who create ASF files with non-Microsoft software. It is
strongly advised to avoid ASF where possible.

The following list of audio codecs should work on most Windows systems:
@table @option
@item adpcm_ima_wav
@item adpcm_ms
@item pcm_s16le
always
@item libmp3lame
If some MP3 codec like LAME is installed.
@end table
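
For example, a command along these lines (a sketch only; the exact codec
choice depends on the target system) should produce a file playable on most
stock Windows installs:

@example
ffmpeg -i input.mkv -c:v msmpeg4v2 -c:a pcm_s16le output.avi
@end example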


@chapter Compilation

@section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}

This is a bug in gcc. Do not report it to us. Instead, please report it to
the gcc developers. Note that we will not add workarounds for gcc bugs.

Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...

@section I have installed this library with my distro's package manager. Why does @command{configure} not see it?

Distributions usually split libraries in several packages. The main package
contains the files necessary to run programs using the library. The
development package contains the files necessary to build programs using the
library. Sometimes, docs and/or data are in a separate package too.

To build FFmpeg, you need to install the development package. It is usually
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
build is finished, but be sure to keep the main package.
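
For example, on Debian-style systems, something like the following installs
the development files for libx264 (package names vary between distributions,
so this is only an illustration):

@example
sudo apt-get install libx264-dev
@end example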

@section How do I make @command{pkg-config} find my libraries?

Somewhere along with your libraries, there is a @file{.pc} file (or several)
in a @file{pkgconfig} directory. You need to set environment variables to
point @command{pkg-config} to these files.

If you need to @emph{add} directories to @command{pkg-config}'s search list
(typical use case: library installed separately), add it to
@code{$PKG_CONFIG_PATH}:

@example
export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
@end example

If you need to @emph{replace} @command{pkg-config}'s search list
(typical use case: cross-compiling), set it in
@code{$PKG_CONFIG_LIBDIR}:

@example
export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
@end example

If you need to know the library's internal dependencies (typical use: static
linking), add the @code{--static} option to @command{pkg-config}:

@example
./configure --pkg-config-flags=--static
@end example

@section How do I use @command{pkg-config} when cross-compiling?

The best way is to install @command{pkg-config} in your cross-compilation
environment. It will automatically use the cross-compilation libraries.

You can also use @command{pkg-config} from the host environment by
explicitly specifying @code{--pkg-config=pkg-config} to @command{configure}.
In that case, you must point @command{pkg-config} to the correct directories
using the @code{PKG_CONFIG_LIBDIR} variable, as explained in the previous entry.
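
For example (the cross toolchain prefix and paths here are purely
illustrative):

@example
PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig \
./configure --pkg-config=pkg-config --enable-cross-compile \
    --arch=arm --target-os=linux --cross-prefix=arm-linux-gnueabihf-
@end example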

As an intermediate solution, you can place in your cross-compilation
environment a script that calls the host @command{pkg-config} with
@code{PKG_CONFIG_LIBDIR} set. That script can look like this:

@example
#!/bin/sh
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
export PKG_CONFIG_LIBDIR
exec /usr/bin/pkg-config "$@@"
@end example

@chapter Usage

@section ffmpeg does not work; what is wrong?

Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help, see
(@url{https://ffmpeg.org/bugreports.html}).

@section How do I encode single pictures into movies?

First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.

@file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc.

Use the @option{-start_number} option to declare a starting number for
the sequence. This is useful if your sequence does not start with
@file{img001.jpg} but is still in a numerical order. The following
example will start with @file{img100.jpg}:

@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example

If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using the Bourne
shell syntax, symbolically links all files in the current directory
that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.

@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example

If you want to sequence them by oldest modified first, substitute
@code{$(ls -r -t *jpg)} in place of @code{*jpg}.

Then run:

@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.

You can also use @command{cat} to pipe images to ffmpeg:

@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example

@section How do I encode a movie to single pictures?

Use:

@example
ffmpeg -i movie.mpg movie%d.jpg
@end example

The @file{movie.mpg} used as input will be converted to
@file{movie1.jpg}, @file{movie2.jpg}, etc.

Instead of relying on file format self-recognition, you may also use
@table @option
@item -c:v ppm
@item -c:v png
@item -c:v mjpeg
@end table
to force the encoding.

Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example

Beware that there is no "jpeg" codec. Use "mjpeg" instead.

@section Why do I see a slight quality degradation with multithreaded MPEG* encoding?

For multithreaded MPEG* encoding, the encoded slices must be independent,
otherwise thread n would practically have to wait for n-1 to finish, so it's
quite logical that there is a small reduction of quality. This is not a bug.

@section How can I read from the standard input or write to the standard output?

Use @file{-} as file name.
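
For instance (just an illustration; any program that reads from or writes to
a pipe will do):

@example
ffmpeg -i input.avi -f matroska - | ffplay -
@end example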

@section -f jpeg doesn't work.

Try '-f image2 test%d.jpg'.

@section Why can I not change the frame rate?

Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
Choose a different codec with the -c:v command line option.
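
For example, to re-encode with a codec that has no such restriction while
forcing a 13 fps output (codec and rate here are illustrative):

@example
ffmpeg -i input.mpg -r 13 -c:v libx264 output.mkv
@end example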

@section How do I encode Xvid or DivX video with ffmpeg?

Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use '-c:v mpeg4' to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
force the fourcc 'xvid' to be stored as the video fourcc rather than the
default.
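
For example:

@example
ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
@end example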

@section Which are good parameters for encoding high quality MPEG-4?

'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
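
Put together into a two-pass command line, that could look like this (the
bitrate and file names are illustrative):

@example
ffmpeg -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 \
    -cmp 2 -subcmp 2 -g 300 -b:v 2M -pass 1 -an -f null /dev/null
ffmpeg -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 \
    -cmp 2 -subcmp 2 -g 300 -b:v 2M -pass 2 output.avi
@end example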

@section Which are good parameters for encoding high quality MPEG-1/MPEG-2?

'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware the '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
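
As a single-pass sketch (the bitrate is illustrative):

@example
ffmpeg -i input.avi -c:v mpeg2video -mbd rd -trellis 2 -cmp 2 \
    -subcmp 2 -g 100 -b:v 4M output.mpg
@end example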

@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?

You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
material, and try '-top 0/1' if the result looks really messed-up.
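
For example, a sketch for MPEG-2 output (assuming top-field-first material):

@example
ffmpeg -i interlaced.mpg -c:v mpeg2video -flags +ilme+ildct -top 1 output.mpg
@end example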

@section How can I read DirectShow files?

If you have built FFmpeg with @code{./configure --enable-avisynth}
(only possible on MinGW/Cygwin platforms),
then you may use any file that DirectShow can read as input.

Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
ffmpeg -i input.avs
@end example

For ANY other help on AviSynth, please visit the
@uref{http://www.avisynth.org/, AviSynth homepage}.

@section How can I join video files?

To "join" video files is quite ambiguous. The following list explains the
different kinds of "joining" and points out how those are addressed in
FFmpeg. To join video files may mean:

@itemize

@item
To put them one after the other: this is called to @emph{concatenate} them
(in short: concat) and is addressed
@ref{How can I concatenate video files, in this very faq}.

@item
To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called to
@emph{multiplex} them together (in short: mux), and is done by simply
invoking ffmpeg with several @option{-i} options, as shown in the sketch
after this list.

@item
For audio, to put all channels together in a single stream (example: two
mono streams into one stereo stream): this is sometimes called to
@emph{merge} them, and can be done using the
@url{ffmpeg-filters.html#amerge, @code{amerge}} filter.

@item
For audio, to play one on top of the other: this is called to @emph{mix}
them, and can be done by first merging them into a single stream and then
using the @url{ffmpeg-filters.html#pan, @code{pan}} filter to mix
the channels at will.

@item
For video, to display both together, side by side or one on top of a part of
the other; it can be done using the
@url{ffmpeg-filters.html#overlay, @code{overlay}} video filter.

@end itemize
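
As an example of the multiplexing case mentioned above, the following sketch
(stream layout and file names are illustrative) puts one video stream and two
audio languages into a single Matroska file:

@example
ffmpeg -i video.mkv -i audio_en.wav -i audio_fr.wav \
    -map 0:v -map 1:a -map 2:a -c copy output.mkv
@end example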

@anchor{How can I concatenate video files}
@section How can I concatenate video files?

There are several solutions, depending on the exact circumstances.

@subsection Concatenating using the concat @emph{filter}

FFmpeg has a @url{ffmpeg-filters.html#concat,
@code{concat}} filter designed specifically for that, with examples in the
documentation. This operation is recommended if you need to re-encode.
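
A minimal sketch (the stream layout is illustrative):

@example
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex \
    "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
    -map "[v]" -map "[a]" output.mp4
@end example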

@subsection Concatenating using the concat @emph{demuxer}

FFmpeg has a @url{ffmpeg-formats.html#concat,
@code{concat}} demuxer which you can use when you want to avoid a re-encode and
your format doesn't support file level concatenation.
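
A minimal sketch, using a list file in the syntax described in the demuxer
documentation (file names are illustrative):

@example
printf "file '%s'\n" input1.mp4 input2.mp4 > mylist.txt
ffmpeg -f concat -i mylist.txt -c copy output.mp4
@end example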

@subsection Concatenating using the concat @emph{protocol} (file level)

FFmpeg has a @url{ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
video by merely concatenating the files containing them.

Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Additionally, you can use the @code{concat} protocol instead of @code{cat} or
@code{copy} which will avoid creation of a potentially huge intermediate file.

@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example

Note that you may need to escape the character "|" which is special for many
shells.

Another option is usage of named pipes, should your platform support it:

@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
@end example

@subsection Concatenating using raw audio and video

Similarly, the yuv4mpegpipe format, and the raw video, raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
@code{tail} as seen below. Note that when piping through @code{tail} you
must use command grouping, @code{@{ ;@}}, to background properly.

For example, let's say we want to concatenate two FLV files into an
output.flv file:

@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
@end example

@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.

Use @option{-dumpgraph -} to find out exactly where the channel layout is
lost.

Most likely, it is through @code{auto-inserted aresample}. Try to understand
why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.

Then insert the correct @code{aformat} explicitly in the filtergraph,
specifying the exact format.

@example
aformat=sample_fmts=s16:channel_layouts=stereo
@end example

@section Why does FFmpeg not see the subtitles in my VOB file?

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles happen only later in the file,
they will not be initially detected.

Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.

The size of the initial scan is controlled by two options: @code{probesize}
(default ~5 MB) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.
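
For example, to make the initial scan much larger (the values are
illustrative):

@example
ffmpeg -probesize 100M -analyzeduration 100M -i input.vob output.mkv
@end example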

@section Why was the @command{ffmpeg} @option{-sameq} option removed? What to use instead?

The @option{-sameq} option meant "same quantizer", and made sense only in a
very limited set of cases. Unfortunately, a lot of people mistook it for
"same quality" and used it in places where it did not make sense: it had
roughly the expected visible effect, but achieved it in a very inefficient
way.

Each encoder has its own set of options to set the quality-vs-size balance;
use the options for the encoder you are using to set the quality level to a
point acceptable for your tastes. The most common options to do that are
@option{-qscale} and @option{-qmax}, but you should peruse the documentation
of the encoder you chose.
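
For example, with the native MPEG-4 encoder (the quality value is
illustrative; lower means better quality):

@example
ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 output.avi
@end example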

@section I have a stretched video, why does scaling not fix it?

A lot of video codecs and formats can store the @emph{aspect ratio} of the
video: this is the ratio between the width and the height of either the full
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
SAR.

Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot
of video standards, especially from the analog-to-digital transition era, use
non-square pixels.

Most processing filters in FFmpeg handle the aspect ratio to avoid
stretching the image: cropping adjusts the DAR to keep the SAR constant,
scaling adjusts the SAR to keep the DAR constant.

If you want to stretch, or “unstretch”, the image, you need to override the
information with the
@url{ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.

Do not forget to examine carefully the original video to check whether the
stretching comes from the image or from the aspect ratio information.

For example, to fix a badly encoded EGA capture, use the following commands,
either the first one to upscale to square pixels or the second one to set
the correct aspect ratio or the third one to avoid transcoding (may not work
depending on the format / codec / player / phase of the moon):

@example
ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
@end example

@chapter Development

@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?

Yes. Check the @file{doc/examples} directory in the source
repository, also available online at:
@url{https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples}.

Examples are also installed by default, usually in
@code{$PREFIX/share/ffmpeg/examples}.

You may also read the Developers Guide of the FFmpeg documentation. Alternatively,
examine the source code for one of the many open source projects that
already incorporate FFmpeg at (@url{projects.html}).

@section Can you support my C compiler XXX?

It depends. If your compiler is C99-compliant, then patches to support
it are likely to be welcome if they do not pollute the source code
with @code{#ifdef}s related to the compiler.

@section Is Microsoft Visual C++ supported?

Yes. Please see the @uref{platform.html, Microsoft Visual C++}
section in the FFmpeg documentation.

@section Can you add automake, libtool or autoconf support?

No. These tools are too bloated and they complicate the build.

@section Why not rewrite FFmpeg in object-oriented C++?

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.

@section Why are the ffmpeg programs devoid of debugging symbols?

The build process creates @command{ffmpeg_g}, @command{ffplay_g}, etc. which
contain full debug information. Those binaries are stripped to create
@command{ffmpeg}, @command{ffplay}, etc. If you need the debug information, use
the *_g versions.

@section I do not like the LGPL, can I contribute code under the GPL instead?

Yes, as long as the code is optional and can easily and cleanly be placed
under @code{#if CONFIG_GPL} without breaking anything. So, for example, a new codec
or filter would be OK under GPL while a bug fix to LGPL code would not.

@section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: @code{-lavdevice} must come before
@code{-lavformat}, @code{-lavutil} must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.

@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/examples/Makefile} and @file{doc/examples/pc-uninstalled} for
more details.

@section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using @code{extern "C"}.
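
For example, a minimal wrapper looks like this:

@example
extern "C" @{
#include <libavformat/avformat.h>
@}
@end example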

See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}.

@section I'm using libavutil from within my C++ application but the compiler complains that 'UINT64_C' was not declared in this scope

FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them you have to append @code{-D__STDC_CONSTANT_MACROS} to your CXXFLAGS.
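
For example, when compiling manually (the file name is illustrative):

@example
g++ -D__STDC_CONSTANT_MACROS -c myapp.cpp $(pkg-config --cflags libavutil)
@end example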

@section I have a file in memory / an API different from *open/*read/libc, how do I use it with libavformat?

You have to create a custom AVIOContext using @code{avio_alloc_context},
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer or MPlayer2 sources.
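
A minimal sketch of opening a file that is already in memory (error handling
omitted; the @code{mem_reader} structure, the callback, and the @code{data}
and @code{size} variables are purely illustrative):

@example
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

struct mem_reader @{
    const uint8_t *data; /* the file, already in memory */
    size_t size, pos;
@};

static int read_packet(void *opaque, uint8_t *buf, int buf_size)
@{
    struct mem_reader *r = opaque;
    size_t left = r->size - r->pos;
    if (left == 0)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = left;
    memcpy(buf, r->data + r->pos, buf_size);
    r->pos += buf_size;
    return buf_size;
@}

/* ... then, instead of avformat_open_input(&fmt, filename, ...): */
struct mem_reader r = @{ data, size, 0 @};
unsigned char *iobuf = av_malloc(4096);
AVIOContext *avio = avio_alloc_context(iobuf, 4096, 0, &r,
                                       read_packet, NULL, NULL);
AVFormatContext *fmt = avformat_alloc_context();
fmt->pb = avio;
avformat_open_input(&fmt, NULL, NULL, NULL);
@end example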

@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

See @url{https://www.ffmpeg.org/~michael/}.

@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?

Even if peculiar since it is network oriented, RTP is a container like any
other. You have to @emph{demux} RTP before feeding the payload to libavcodec.
In this specific case please look at RFC 4629 to see how it should be done.

@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

@code{r_frame_rate} is NOT the average frame rate, it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then @code{r_frame_rate}
will be 150 (it is the least common multiple).
If you are looking for the average frame rate, see @code{AVStream.avg_frame_rate}.

@section Why is @code{make fate} not running all tests?

Make sure you have the fate-suite samples and that the @code{SAMPLES} Make variable,
the @code{FATE_SAMPLES} environment variable, or the @code{--samples}
@command{configure} option is set to the right path.
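
For example (the path is illustrative):

@example
make fate-rsync SAMPLES=fate-suite/
make fate SAMPLES=fate-suite/
@end example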

@section Why is @code{make fate} not finding the samples?

Do you happen to have a @code{~} character in the samples path to indicate a
home directory? The value is used in ways where the shell cannot expand it,
causing FATE to not find files. Just replace @code{~} by the full path.

@bye