Tag Archives: ffmpeg

ffmpeg commands and YUV Toolkit for raw image/video

I was involved in driver development for a sensor, where I faced the problem of viewing raw data:

ffmpeg -t 30 -f v4l2 -channel 0 -video_size 752x480 -i /dev/video0 -pix_fmt nv12 -r 30 -b:v 64k -c:v copy Desktop/raw.nut

This saves the raw stream to a file. To play the raw file with VLC, use the command below:

"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" --colorthres-saturationthres=0 "C:\Users\nandan\Desktop\raw.nut"

Alternatively, open the raw file with VLC and change the saturation to zero manually.
In VLC go to:
Tools > Effects and Filters > Video Effects
tick Image adjust
set Saturation to 0.

For raw image viewing, YUV Toolkit is really good; I have tried it on both Ubuntu and Windows.

Set the resolution to your image size. If the image name has the resolution in it, YUV Toolkit can pick that up automatically. Set the image format to 8-bit grayscale or whatever matches your data.



ffmpeg: libavformat, libavcodec and x264

ffmpeg is a great tool for almost all your video processing and playback needs. For general purposes everything is available on the command line; most of the time you only need to find the correct command for your specific requirement.
Rarely you need more control over your video, like encoding with a different library and then muxing with ffmpeg. Or say you are streaming over the network and the data is not in any particular container format; you don't want the overhead of wrapping it in some format just to stream it and decode it via ffmpeg. In those scenarios you have to drop the ffmpeg command-line wrapper and do it your own way using the core APIs.

ffmpeg is easy in that way too, once you get the feel of it. There are just a couple of functions you need to achieve all this. Sometimes you don't even need all of ffmpeg and can get away with just libavcodec, libswscale, etc.

If you have an input file for processing, you can just check out the sample code provided: set the input file for reading and you are done. It gets more interesting when your requirement is providing your own input and also getting the output back in code, not in a file or stream.
I wrote my own wrapper around ffmpeg for Windows to get access to the core APIs I needed.

Once you have the wrapper, say ffmpeg_wrapper.c, just compile it like this:

gcc -I. -c -o ffmpeg_wrapper.o ffmpeg_wrapper.c
gcc -shared -o ffmpeg_wrapper.dll ffmpeg_wrapper.o -L. -lavformat -lavcodec -lavutil -lWs2_32 -liconv

plus any other library you need on the link line.

Includes are done like this:

#include "libavformat/avformat.h"
#include "libavutil/opt.h"
#include "libavutil/mathematics.h"
#include "libavutil/timestamp.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"

#define DLLEXPORT __declspec(dllexport)
#define CDECL __cdecl

I faced a problem while encoding via a separate H.264 DLL and then passing the output to ffmpeg for muxing. The problem was with timing; I know there are options in x264 to set the timing info, but that was not working out, so I generated the timing using ffmpeg itself:

static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
    /* rescale output packet timestamp values from codec to stream timebase */
    pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
    pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
    pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);
    pkt->stream_index = st->index;

    /* Write the compressed frame to the media file. */
    log_packet(fmt_ctx, pkt);
    return av_interleaved_write_frame(fmt_ctx, pkt);
}


Windows Programming: Multi-Language

Nowadays I am doing some Windows application development, and I am loving it :).

I was never such a big fan of Windows. I can understand the reasons for most of the issues that keep Windows from being great: they have to support so many varieties of hardware, so many versions, and so many applications.

Anyhow, I noticed that Windows has a very user-friendly development environment. There are fewer open source projects and less help, for obvious reasons. This is a kind of bottleneck, but when I learned about DllImport I was like... ahhh, great, now I can do whatever I want: develop the GUI and the simple stuff in C#, and then use dllexport on existing libraries by just writing a wrapper file around each one. From there onward I have used this approach in all kinds of development.

To explain the usefulness of dllexport I will use an example where I have to do some video encoding and streaming. There are already great open source projects for that, and I want to integrate them with my C# application: x264 for encoding, C++ network code for sending data, and libavcodec/libavformat for converting between formats and muxing the video data.

C/C++ Part

Here I will explain, with one simple example, how to use dllexport in a Windows project.

Let us first take a simple example, say I want to use a C function defined below:

int func(int arg)
{
  int result = 0;
  //some kind of processing
  //may call other defined C functions
  return result;
}

Say the above function, along with all the useful stuff, is in some file example.c (it may be spread across multiple files). We just want to use func from C#. We redefine the function as below:

__declspec(dllexport) int __cdecl func(int arg)
{
  //same stuff
}

To be user-friendly we can define two macros as below:

#define DLLEXPORT __declspec(dllexport)
#define CDECL __cdecl

Now our function will look pretty good:

DLLEXPORT int CDECL func(int arg)
{
  //same stuff
}

By //same stuff I mean the function body of func.
Now we compile all the C code as before, but this time into a DLL. GCC provides the commands to do so.
First compile all the files, including example.c, to get the object files:

gcc -c -o example.o example.c

gcc -shared -o library.dll example.o other_files.o other_libraries.a

other_files.o and other_libraries.a are optional, only required if your C project is big and uses multiple files and libraries. We will see that in the next example, using x264 for encoding from a C# project.
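Putting the pieces together, a self-contained example.c using those macros might look like this. The #ifdef is my addition so the file also compiles as plain C outside Windows, and the body (arg * 2) is just a placeholder for the real processing:

```c
#ifdef _WIN32
#define DLLEXPORT __declspec(dllexport)
#define CDECL     __cdecl
#else
/* the attributes are Windows-only; make them no-ops elsewhere */
#define DLLEXPORT
#define CDECL
#endif

/* The exported entry point that C# binds to via DllImport. */
DLLEXPORT int CDECL func(int arg)
{
    int result = arg * 2;   /* placeholder for the real processing */
    return result;
}
```

Compile it exactly as shown above; the exported symbol name `func` is what the C# declaration refers to.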

C# Part

We are almost done. Now we just need to write our C# code, and wherever we want to use the function (func) from example.c we first declare it as below:

[DllImport("library.dll", CallingConvention = CallingConvention.Cdecl)]
static extern int func(int arg);

Now we are free to use this function in our C# code just like any other function.
That's all; so simple.
Now let us check one example where we will use libx264. We can do the same for ffmpeg by creating an ffmpeg DLL. Sometimes there is the problem of passing a struct variable from one C function to another, say when you want to use ffmpeg on one side while also using x264. Since in C# we can't just define these structs, we use IntPtr whenever there is any such requirement; this comes in very handy.
I guess I will do another post for this, as this post is already long.
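On the C side, the IntPtr trick is just an opaque handle: the DLL hands out a pointer to a struct whose layout C# never needs to know, and C# passes the same IntPtr back into later calls. A minimal sketch with a made-up handle type (not the actual x264 or ffmpeg wrapper):

```c
#include <stdlib.h>

/* C# never sees the layout of this struct; it only holds the
 * pointer as an IntPtr and passes it back into these functions. */
typedef struct {
    int width;
    int height;
} encoder_handle;

/* declared in C# as: static extern IntPtr create_encoder(int, int); */
encoder_handle *create_encoder(int width, int height)
{
    encoder_handle *h = malloc(sizeof *h);
    if (h) {
        h->width  = width;
        h->height = height;
    }
    return h;
}

/* declared in C# as: static extern int encoder_width(IntPtr); */
int encoder_width(const encoder_handle *h)
{
    return h->width;
}

/* declared in C# as: static extern void destroy_encoder(IntPtr); */
void destroy_encoder(encoder_handle *h)
{
    free(h);
}
```

Because every access to the struct happens inside the DLL, no marshalling of the struct layout is ever needed on the C# side.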
Scratch Pad:

gcc -shared -o libmpegts.dll main.o libmpegts.a
gcc -I. -c -o tsmuxer.o tsmuxer.c

gcc -shared -o tsmuxer.dll tsmuxer.o -L. -lavformat -lavcodec -lavutil -lWs2_32 -liconv


[DllImport("kernel32.dll")]
static extern Boolean Beep(UInt32 frequency, UInt32 duration);

[DllImport("libx264", CallingConvention = CallingConvention.Cdecl)]
private static extern IntPtr initializePicOut();

DLLEXPORT x264_picture_t* CDECL initializePicOut()

DLLEXPORT x264_t* CDECL setX264Params(int width, int height, int FPS)
{
    printf("setX264Params width: %d, height: %d, FPS: %d.\n", width, height, FPS);
    x264_param_t param;
    int res = 0;
    res = x264_param_default_preset(&param, "veryfast", "zerolatency");
    if (res != 0) {
        printf("error: cannot set the default preset on x264.\n");
        return NULL;
    }
    param.i_threads = 1;
    param.i_width = width;
    param.i_height = height;
    param.i_fps_num = FPS;
    param.i_fps_den = 1;
    // Intra refresh:
    param.i_keyint_max = FPS;
    param.b_intra_refresh = 1;
    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = FPS - 5;
    param.rc.f_rf_constant_max = FPS + 5;
    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    res = x264_param_apply_profile(&param, "baseline");
    if (res != 0) {
        printf("error: cannot set the baseline profile on x264.\n");
        return NULL;